00:00:00.001 Started by upstream project "autotest-spdk-master-vs-dpdk-v22.11" build number 1994 00:00:00.001 originally caused by: 00:00:00.002 Started by upstream project "nightly-trigger" build number 3260 00:00:00.002 originally caused by: 00:00:00.002 Started by timer 00:00:00.002 Started by timer 00:00:00.096 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.097 The recommended git tool is: git 00:00:00.097 using credential 00000000-0000-0000-0000-000000000002 00:00:00.099 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.137 Fetching changes from the remote Git repository 00:00:00.139 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.174 Using shallow fetch with depth 1 00:00:00.174 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.174 > git --version # timeout=10 00:00:00.206 > git --version # 'git version 2.39.2' 00:00:00.206 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.226 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.226 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.089 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.099 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.112 Checking out Revision 4b79378c7834917407ff4d2cff4edf1dcbb13c5f (FETCH_HEAD) 00:00:06.112 > git config core.sparsecheckout # timeout=10 00:00:06.121 > git read-tree -mu HEAD # timeout=10 00:00:06.136 > git checkout -f 4b79378c7834917407ff4d2cff4edf1dcbb13c5f # timeout=5 00:00:06.156 Commit message: "jbp-per-patch: add create-perf-report job as a part of testing" 00:00:06.156 > git rev-list --no-walk 4b79378c7834917407ff4d2cff4edf1dcbb13c5f # timeout=10 00:00:06.288 [Pipeline] Start of Pipeline 00:00:06.314 [Pipeline] library 00:00:06.316 Loading library shm_lib@master 00:00:06.316 Library shm_lib@master is cached. Copying from home. 00:00:06.330 [Pipeline] node 00:00:06.338 Running on VM-host-SM9 in /var/jenkins/workspace/nvme-vg-autotest 00:00:06.339 [Pipeline] { 00:00:06.347 [Pipeline] catchError 00:00:06.348 [Pipeline] { 00:00:06.359 [Pipeline] wrap 00:00:06.366 [Pipeline] { 00:00:06.372 [Pipeline] stage 00:00:06.373 [Pipeline] { (Prologue) 00:00:06.387 [Pipeline] echo 00:00:06.388 Node: VM-host-SM9 00:00:06.392 [Pipeline] cleanWs 00:00:06.399 [WS-CLEANUP] Deleting project workspace... 00:00:06.399 [WS-CLEANUP] Deferred wipeout is used... 
00:00:06.405 [WS-CLEANUP] done 00:00:06.582 [Pipeline] setCustomBuildProperty 00:00:06.661 [Pipeline] httpRequest 00:00:06.693 [Pipeline] echo 00:00:06.696 Sorcerer 10.211.164.101 is alive 00:00:06.715 [Pipeline] httpRequest 00:00:06.724 HttpMethod: GET 00:00:06.725 URL: http://10.211.164.101/packages/jbp_4b79378c7834917407ff4d2cff4edf1dcbb13c5f.tar.gz 00:00:06.726 Sending request to url: http://10.211.164.101/packages/jbp_4b79378c7834917407ff4d2cff4edf1dcbb13c5f.tar.gz 00:00:06.743 Response Code: HTTP/1.1 200 OK 00:00:06.744 Success: Status code 200 is in the accepted range: 200,404 00:00:06.745 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_4b79378c7834917407ff4d2cff4edf1dcbb13c5f.tar.gz 00:00:11.306 [Pipeline] sh 00:00:11.581 + tar --no-same-owner -xf jbp_4b79378c7834917407ff4d2cff4edf1dcbb13c5f.tar.gz 00:00:11.596 [Pipeline] httpRequest 00:00:11.613 [Pipeline] echo 00:00:11.614 Sorcerer 10.211.164.101 is alive 00:00:11.622 [Pipeline] httpRequest 00:00:11.626 HttpMethod: GET 00:00:11.627 URL: http://10.211.164.101/packages/spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz 00:00:11.627 Sending request to url: http://10.211.164.101/packages/spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz 00:00:11.633 Response Code: HTTP/1.1 200 OK 00:00:11.634 Success: Status code 200 is in the accepted range: 200,404 00:00:11.634 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz 00:01:02.081 [Pipeline] sh 00:01:02.361 + tar --no-same-owner -xf spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz 00:01:05.658 [Pipeline] sh 00:01:05.938 + git -C spdk log --oneline -n5 00:01:05.938 719d03c6a sock/uring: only register net impl if supported 00:01:05.938 e64f085ad vbdev_lvol_ut: unify usage of dummy base bdev 00:01:05.938 9937c0160 lib/rdma: bind TRACE_BDEV_IO_START/DONE to OBJECT_NVMF_RDMA_IO 00:01:05.938 6c7c1f57e accel: add sequence outstanding stat 00:01:05.938 3bc8e6a26 accel: add utility to put task 00:01:05.961 [Pipeline] withCredentials 00:01:05.975 > git --version # timeout=10 00:01:05.988 > git --version # 'git version 2.39.2' 00:01:06.007 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:06.009 [Pipeline] { 00:01:06.019 [Pipeline] retry 00:01:06.022 [Pipeline] { 00:01:06.040 [Pipeline] sh 00:01:06.320 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4 00:01:07.267 [Pipeline] } 00:01:07.287 [Pipeline] // retry 00:01:07.292 [Pipeline] } 00:01:07.311 [Pipeline] // withCredentials 00:01:07.319 [Pipeline] httpRequest 00:01:07.336 [Pipeline] echo 00:01:07.338 Sorcerer 10.211.164.101 is alive 00:01:07.346 [Pipeline] httpRequest 00:01:07.349 HttpMethod: GET 00:01:07.350 URL: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:07.350 Sending request to url: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:07.358 Response Code: HTTP/1.1 200 OK 00:01:07.359 Success: Status code 200 is in the accepted range: 200,404 00:01:07.359 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:16.646 [Pipeline] sh 00:01:16.923 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:18.310 [Pipeline] sh 00:01:18.609 + git -C dpdk log --oneline -n5 00:01:18.609 caf0f5d395 version: 22.11.4 00:01:18.609 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:01:18.609 dc9c799c7d vhost: fix missing spinlock unlock 
00:01:18.609 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:01:18.609 6ef77f2a5e net/gve: fix RX buffer size alignment 00:01:18.657 [Pipeline] writeFile 00:01:18.672 [Pipeline] sh 00:01:18.950 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:18.961 [Pipeline] sh 00:01:19.241 + cat autorun-spdk.conf 00:01:19.241 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:19.241 SPDK_TEST_NVME=1 00:01:19.241 SPDK_TEST_FTL=1 00:01:19.241 SPDK_TEST_ISAL=1 00:01:19.241 SPDK_RUN_ASAN=1 00:01:19.241 SPDK_RUN_UBSAN=1 00:01:19.241 SPDK_TEST_XNVME=1 00:01:19.241 SPDK_TEST_NVME_FDP=1 00:01:19.241 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:19.241 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:19.241 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:19.247 RUN_NIGHTLY=1 00:01:19.249 [Pipeline] } 00:01:19.265 [Pipeline] // stage 00:01:19.280 [Pipeline] stage 00:01:19.282 [Pipeline] { (Run VM) 00:01:19.296 [Pipeline] sh 00:01:19.574 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:19.574 + echo 'Start stage prepare_nvme.sh' 00:01:19.574 Start stage prepare_nvme.sh 00:01:19.574 + [[ -n 2 ]] 00:01:19.574 + disk_prefix=ex2 00:01:19.574 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]] 00:01:19.574 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]] 00:01:19.574 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf 00:01:19.574 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:19.574 ++ SPDK_TEST_NVME=1 00:01:19.574 ++ SPDK_TEST_FTL=1 00:01:19.574 ++ SPDK_TEST_ISAL=1 00:01:19.574 ++ SPDK_RUN_ASAN=1 00:01:19.574 ++ SPDK_RUN_UBSAN=1 00:01:19.574 ++ SPDK_TEST_XNVME=1 00:01:19.574 ++ SPDK_TEST_NVME_FDP=1 00:01:19.574 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:19.574 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:19.574 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:19.574 ++ RUN_NIGHTLY=1 00:01:19.574 + cd /var/jenkins/workspace/nvme-vg-autotest 00:01:19.574 + nvme_files=() 00:01:19.574 + declare -A nvme_files 00:01:19.574 + backend_dir=/var/lib/libvirt/images/backends 00:01:19.574 + nvme_files['nvme.img']=5G 00:01:19.574 + nvme_files['nvme-cmb.img']=5G 00:01:19.574 + nvme_files['nvme-multi0.img']=4G 00:01:19.574 + nvme_files['nvme-multi1.img']=4G 00:01:19.574 + nvme_files['nvme-multi2.img']=4G 00:01:19.574 + nvme_files['nvme-openstack.img']=8G 00:01:19.574 + nvme_files['nvme-zns.img']=5G 00:01:19.574 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:19.574 + (( SPDK_TEST_FTL == 1 )) 00:01:19.574 + nvme_files["nvme-ftl.img"]=6G 00:01:19.574 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:19.574 + nvme_files["nvme-fdp.img"]=1G 00:01:19.574 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:01:19.574 + for nvme in "${!nvme_files[@]}" 00:01:19.574 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi2.img -s 4G 00:01:19.574 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:19.574 + for nvme in "${!nvme_files[@]}" 00:01:19.574 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-ftl.img -s 6G 00:01:19.833 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:01:19.833 + for nvme in "${!nvme_files[@]}" 00:01:19.833 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-cmb.img -s 5G 00:01:19.833 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:19.833 + for nvme in "${!nvme_files[@]}" 00:01:19.833 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-openstack.img -s 8G 00:01:20.091 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:20.091 + for nvme in "${!nvme_files[@]}" 00:01:20.091 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-zns.img -s 5G 00:01:20.091 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:20.091 + for nvme in "${!nvme_files[@]}" 00:01:20.091 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi1.img -s 4G 00:01:20.349 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:20.349 + for nvme in "${!nvme_files[@]}" 00:01:20.349 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi0.img -s 4G 00:01:20.607 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:20.608 + for nvme in "${!nvme_files[@]}" 00:01:20.608 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-fdp.img -s 1G 00:01:20.608 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:01:20.608 + for nvme in "${!nvme_files[@]}" 00:01:20.608 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme.img -s 5G 00:01:20.866 Formatting '/var/lib/libvirt/images/backends/ex2-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:20.866 ++ sudo grep -rl ex2-nvme.img /etc/libvirt/qemu 00:01:20.866 + echo 'End stage prepare_nvme.sh' 00:01:20.866 End stage prepare_nvme.sh 00:01:20.877 [Pipeline] sh 00:01:21.156 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:21.156 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex2-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex2-nvme.img -b /var/lib/libvirt/images/backends/ex2-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex2-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora38 00:01:21.156 00:01:21.156 
DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant 00:01:21.156 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk 00:01:21.156 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest 00:01:21.156 HELP=0 00:01:21.156 DRY_RUN=0 00:01:21.156 NVME_FILE=/var/lib/libvirt/images/backends/ex2-nvme-ftl.img,/var/lib/libvirt/images/backends/ex2-nvme.img,/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,/var/lib/libvirt/images/backends/ex2-nvme-fdp.img, 00:01:21.156 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:01:21.156 NVME_AUTO_CREATE=0 00:01:21.156 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img,, 00:01:21.156 NVME_CMB=,,,, 00:01:21.156 NVME_PMR=,,,, 00:01:21.156 NVME_ZNS=,,,, 00:01:21.156 NVME_MS=true,,,, 00:01:21.156 NVME_FDP=,,,on, 00:01:21.156 SPDK_VAGRANT_DISTRO=fedora38 00:01:21.156 SPDK_VAGRANT_VMCPU=10 00:01:21.156 SPDK_VAGRANT_VMRAM=12288 00:01:21.156 SPDK_VAGRANT_PROVIDER=libvirt 00:01:21.156 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:21.156 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:21.156 SPDK_OPENSTACK_NETWORK=0 00:01:21.156 VAGRANT_PACKAGE_BOX=0 00:01:21.156 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:21.156 FORCE_DISTRO=true 00:01:21.156 VAGRANT_BOX_VERSION= 00:01:21.156 EXTRA_VAGRANTFILES= 00:01:21.156 NIC_MODEL=e1000 00:01:21.156 00:01:21.156 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt' 00:01:21.156 /var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt /var/jenkins/workspace/nvme-vg-autotest 00:01:24.445 Bringing machine 'default' up with 'libvirt' provider... 00:01:24.445 ==> default: Creating image (snapshot of base box volume). 00:01:24.705 ==> default: Creating domain with the following settings... 
00:01:24.705 ==> default: -- Name: fedora38-38-1.6-1716830599-074-updated-1705279005_default_1720721290_fca76b67ef5e4dee616a 00:01:24.705 ==> default: -- Domain type: kvm 00:01:24.705 ==> default: -- Cpus: 10 00:01:24.705 ==> default: -- Feature: acpi 00:01:24.705 ==> default: -- Feature: apic 00:01:24.705 ==> default: -- Feature: pae 00:01:24.705 ==> default: -- Memory: 12288M 00:01:24.705 ==> default: -- Memory Backing: hugepages: 00:01:24.705 ==> default: -- Management MAC: 00:01:24.705 ==> default: -- Loader: 00:01:24.705 ==> default: -- Nvram: 00:01:24.705 ==> default: -- Base box: spdk/fedora38 00:01:24.705 ==> default: -- Storage pool: default 00:01:24.705 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1716830599-074-updated-1705279005_default_1720721290_fca76b67ef5e4dee616a.img (20G) 00:01:24.705 ==> default: -- Volume Cache: default 00:01:24.705 ==> default: -- Kernel: 00:01:24.705 ==> default: -- Initrd: 00:01:24.705 ==> default: -- Graphics Type: vnc 00:01:24.705 ==> default: -- Graphics Port: -1 00:01:24.705 ==> default: -- Graphics IP: 127.0.0.1 00:01:24.705 ==> default: -- Graphics Password: Not defined 00:01:24.705 ==> default: -- Video Type: cirrus 00:01:24.705 ==> default: -- Video VRAM: 9216 00:01:24.705 ==> default: -- Sound Type: 00:01:24.705 ==> default: -- Keymap: en-us 00:01:24.705 ==> default: -- TPM Path: 00:01:24.705 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:24.705 ==> default: -- Command line args: 00:01:24.705 ==> default: -> value=-device, 00:01:24.705 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:24.705 ==> default: -> value=-drive, 00:01:24.705 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:01:24.705 ==> default: -> value=-device, 00:01:24.705 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:01:24.705 ==> default: -> value=-device, 00:01:24.705 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:24.705 ==> default: -> value=-drive, 00:01:24.705 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme.img,if=none,id=nvme-1-drive0, 00:01:24.705 ==> default: -> value=-device, 00:01:24.705 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:24.705 ==> default: -> value=-device, 00:01:24.705 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12, 00:01:24.705 ==> default: -> value=-drive, 00:01:24.705 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:01:24.705 ==> default: -> value=-device, 00:01:24.705 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:24.705 ==> default: -> value=-drive, 00:01:24.705 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:01:24.705 ==> default: -> value=-device, 00:01:24.705 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:24.705 ==> default: -> value=-drive, 00:01:24.705 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:01:24.705 ==> default: -> value=-device, 00:01:24.705 ==> default: -> 
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:24.705 ==> default: -> value=-device, 00:01:24.705 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:01:24.705 ==> default: -> value=-device, 00:01:24.705 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3, 00:01:24.705 ==> default: -> value=-drive, 00:01:24.705 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:01:24.705 ==> default: -> value=-device, 00:01:24.705 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:24.705 ==> default: Creating shared folders metadata... 00:01:24.705 ==> default: Starting domain. 00:01:26.085 ==> default: Waiting for domain to get an IP address... 00:01:44.168 ==> default: Waiting for SSH to become available... 00:01:44.168 ==> default: Configuring and enabling network interfaces... 00:01:46.071 default: SSH address: 192.168.121.139:22 00:01:46.071 default: SSH username: vagrant 00:01:46.071 default: SSH auth method: private key 00:01:48.669 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:55.226 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:02:00.491 ==> default: Mounting SSHFS shared folder... 00:02:02.391 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:02:02.391 ==> default: Checking Mount.. 00:02:03.361 ==> default: Folder Successfully Mounted! 00:02:03.361 ==> default: Running provisioner: file... 00:02:04.296 default: ~/.gitconfig => .gitconfig 00:02:04.555 00:02:04.555 SUCCESS! 00:02:04.555 00:02:04.555 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt and type "vagrant ssh" to use. 00:02:04.555 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:04.555 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt" to destroy all trace of vm. 00:02:04.555 00:02:04.564 [Pipeline] } 00:02:04.584 [Pipeline] // stage 00:02:04.593 [Pipeline] dir 00:02:04.594 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt 00:02:04.596 [Pipeline] { 00:02:04.610 [Pipeline] catchError 00:02:04.612 [Pipeline] { 00:02:04.626 [Pipeline] sh 00:02:04.906 + vagrant ssh-config --host vagrant 00:02:04.906 + sed -ne /^Host/,$p 00:02:04.906 + tee ssh_conf 00:02:09.096 Host vagrant 00:02:09.096 HostName 192.168.121.139 00:02:09.096 User vagrant 00:02:09.096 Port 22 00:02:09.096 UserKnownHostsFile /dev/null 00:02:09.096 StrictHostKeyChecking no 00:02:09.096 PasswordAuthentication no 00:02:09.096 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1716830599-074-updated-1705279005/libvirt/fedora38 00:02:09.096 IdentitiesOnly yes 00:02:09.096 LogLevel FATAL 00:02:09.096 ForwardAgent yes 00:02:09.096 ForwardX11 yes 00:02:09.096 00:02:09.111 [Pipeline] withEnv 00:02:09.113 [Pipeline] { 00:02:09.129 [Pipeline] sh 00:02:09.408 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:09.408 source /etc/os-release 00:02:09.408 [[ -e /image.version ]] && img=$(< /image.version) 00:02:09.408 # Minimal, systemd-like check. 
00:02:09.408 if [[ -e /.dockerenv ]]; then 00:02:09.408 # Clear garbage from the node's name: 00:02:09.408 # agt-er_autotest_547-896 -> autotest_547-896 00:02:09.408 # $HOSTNAME is the actual container id 00:02:09.408 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:09.408 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:09.408 # We can assume this is a mount from a host where container is running, 00:02:09.408 # so fetch its hostname to easily identify the target swarm worker. 00:02:09.408 container="$(< /etc/hostname) ($agent)" 00:02:09.408 else 00:02:09.408 # Fallback 00:02:09.408 container=$agent 00:02:09.408 fi 00:02:09.408 fi 00:02:09.408 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:09.408 00:02:09.678 [Pipeline] } 00:02:09.697 [Pipeline] // withEnv 00:02:09.706 [Pipeline] setCustomBuildProperty 00:02:09.721 [Pipeline] stage 00:02:09.724 [Pipeline] { (Tests) 00:02:09.743 [Pipeline] sh 00:02:10.022 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:10.294 [Pipeline] sh 00:02:10.573 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:10.845 [Pipeline] timeout 00:02:10.845 Timeout set to expire in 40 min 00:02:10.847 [Pipeline] { 00:02:10.863 [Pipeline] sh 00:02:11.140 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:11.706 HEAD is now at 719d03c6a sock/uring: only register net impl if supported 00:02:11.716 [Pipeline] sh 00:02:11.990 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:12.261 [Pipeline] sh 00:02:12.605 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:12.879 [Pipeline] sh 00:02:13.157 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo 00:02:13.157 ++ readlink -f spdk_repo 00:02:13.157 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:13.157 + [[ -n /home/vagrant/spdk_repo ]] 00:02:13.157 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:13.157 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:13.157 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:13.157 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:13.157 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:13.157 + [[ nvme-vg-autotest == pkgdep-* ]] 00:02:13.157 + cd /home/vagrant/spdk_repo 00:02:13.157 + source /etc/os-release 00:02:13.157 ++ NAME='Fedora Linux' 00:02:13.157 ++ VERSION='38 (Cloud Edition)' 00:02:13.157 ++ ID=fedora 00:02:13.157 ++ VERSION_ID=38 00:02:13.157 ++ VERSION_CODENAME= 00:02:13.157 ++ PLATFORM_ID=platform:f38 00:02:13.157 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:02:13.157 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:13.157 ++ LOGO=fedora-logo-icon 00:02:13.157 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:02:13.157 ++ HOME_URL=https://fedoraproject.org/ 00:02:13.157 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:02:13.157 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:13.157 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:13.157 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:13.157 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:02:13.157 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:13.157 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:02:13.157 ++ SUPPORT_END=2024-05-14 00:02:13.157 ++ VARIANT='Cloud Edition' 00:02:13.157 ++ VARIANT_ID=cloud 00:02:13.157 + uname -a 00:02:13.157 Linux fedora38-cloud-1716830599-074-updated-1705279005 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:02:13.157 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:13.722 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:13.980 Hugepages 00:02:13.980 node hugesize free / total 00:02:13.980 node0 1048576kB 0 / 0 00:02:13.980 node0 2048kB 0 / 0 00:02:13.980 00:02:13.980 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:13.980 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:13.980 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:13.980 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:02:13.980 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:02:13.980 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:02:13.980 + rm -f /tmp/spdk-ld-path 00:02:13.980 + source autorun-spdk.conf 00:02:13.980 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:13.980 ++ SPDK_TEST_NVME=1 00:02:13.980 ++ SPDK_TEST_FTL=1 00:02:13.980 ++ SPDK_TEST_ISAL=1 00:02:13.980 ++ SPDK_RUN_ASAN=1 00:02:13.980 ++ SPDK_RUN_UBSAN=1 00:02:13.980 ++ SPDK_TEST_XNVME=1 00:02:13.980 ++ SPDK_TEST_NVME_FDP=1 00:02:13.980 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:13.980 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:13.980 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:13.980 ++ RUN_NIGHTLY=1 00:02:13.980 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:13.980 + [[ -n '' ]] 00:02:13.980 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:13.980 + for M in /var/spdk/build-*-manifest.txt 00:02:13.980 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:13.980 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:13.980 + for M in /var/spdk/build-*-manifest.txt 00:02:13.980 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:13.980 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:13.980 ++ uname 00:02:13.980 + [[ Linux == \L\i\n\u\x ]] 00:02:13.980 + sudo dmesg -T 00:02:14.239 + sudo dmesg --clear 00:02:14.239 + dmesg_pid=5947 00:02:14.239 + sudo dmesg -Tw 00:02:14.239 + [[ Fedora Linux == FreeBSD ]] 
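Aside: the trace above consumes autorun-spdk.conf by sourcing it directly — the file is plain KEY=value shell, read once by prepare_nvme.sh and again by autoruner.sh. A minimal sketch of that pattern, assuming the same file format; the validate_conf helper is a hypothetical name for illustration, not part of the SPDK scripts:

  #!/usr/bin/env bash
  # Source a KEY=value conf file the way the job does, then sanity-check
  # the toggles this nvme-vg run depends on. validate_conf is illustrative.
  set -euo pipefail

  conf=${1:-autorun-spdk.conf}
  # shellcheck source=/dev/null
  source "$conf"

  validate_conf() {
      local key
      for key in SPDK_TEST_NVME SPDK_TEST_FTL SPDK_RUN_ASAN; do
          # Indirect expansion; unset keys default to 0 under set -u.
          [[ ${!key:-0} == 1 ]] || { echo "missing $key=1 in $conf" >&2; return 1; }
      done
      # External-DPDK runs need the build directory to exist (hypothetical check).
      if [[ -n ${SPDK_RUN_EXTERNAL_DPDK:-} && ! -d $SPDK_RUN_EXTERNAL_DPDK ]]; then
          echo "SPDK_RUN_EXTERNAL_DPDK points at a missing dir" >&2
          return 1
      fi
  }

  validate_conf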
00:02:14.239 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:14.239 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:14.239 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:14.239 + [[ -x /usr/src/fio-static/fio ]] 00:02:14.239 + export FIO_BIN=/usr/src/fio-static/fio 00:02:14.239 + FIO_BIN=/usr/src/fio-static/fio 00:02:14.239 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:14.239 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:14.239 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:14.239 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:14.239 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:14.239 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:14.239 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:14.239 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:14.239 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:14.239 Test configuration: 00:02:14.239 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:14.239 SPDK_TEST_NVME=1 00:02:14.239 SPDK_TEST_FTL=1 00:02:14.239 SPDK_TEST_ISAL=1 00:02:14.239 SPDK_RUN_ASAN=1 00:02:14.239 SPDK_RUN_UBSAN=1 00:02:14.239 SPDK_TEST_XNVME=1 00:02:14.239 SPDK_TEST_NVME_FDP=1 00:02:14.239 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:14.239 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:14.239 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:14.239 RUN_NIGHTLY=1 18:09:00 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:14.239 18:09:00 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:14.239 18:09:00 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:14.239 18:09:00 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:14.239 18:09:00 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:14.239 18:09:00 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:14.239 18:09:00 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:14.239 18:09:00 -- paths/export.sh@5 -- $ export PATH 00:02:14.239 18:09:00 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:14.239 
18:09:00 -- common/autobuild_common.sh@443 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:14.239 18:09:00 -- common/autobuild_common.sh@444 -- $ date +%s 00:02:14.239 18:09:00 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1720721340.XXXXXX 00:02:14.239 18:09:00 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1720721340.LwzE64 00:02:14.239 18:09:00 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:02:14.239 18:09:00 -- common/autobuild_common.sh@450 -- $ '[' -n v22.11.4 ']' 00:02:14.239 18:09:00 -- common/autobuild_common.sh@451 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:14.239 18:09:00 -- common/autobuild_common.sh@451 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:02:14.239 18:09:00 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:14.239 18:09:00 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:14.239 18:09:00 -- common/autobuild_common.sh@460 -- $ get_config_params 00:02:14.239 18:09:00 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:02:14.239 18:09:00 -- common/autotest_common.sh@10 -- $ set +x 00:02:14.239 18:09:00 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme' 00:02:14.239 18:09:00 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:02:14.239 18:09:00 -- pm/common@17 -- $ local monitor 00:02:14.239 18:09:00 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:14.239 18:09:00 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:14.239 18:09:00 -- pm/common@25 -- $ sleep 1 00:02:14.239 18:09:00 -- pm/common@21 -- $ date +%s 00:02:14.239 18:09:00 -- pm/common@21 -- $ date +%s 00:02:14.239 18:09:00 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1720721340 00:02:14.239 18:09:00 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1720721340 00:02:14.239 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1720721340_collect-vmstat.pm.log 00:02:14.239 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1720721340_collect-cpu-load.pm.log 00:02:15.174 18:09:01 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:02:15.174 18:09:01 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:15.174 18:09:01 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:15.174 18:09:01 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:15.174 18:09:01 -- spdk/autobuild.sh@16 -- $ date -u 00:02:15.174 Thu Jul 11 06:09:01 PM UTC 2024 00:02:15.174 18:09:01 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:15.174 v24.09-pre-202-g719d03c6a 00:02:15.174 18:09:01 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:02:15.174 18:09:01 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:02:15.174 18:09:01 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:02:15.174 18:09:01 -- 
common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:15.432 18:09:01 -- common/autotest_common.sh@10 -- $ set +x 00:02:15.432 ************************************ 00:02:15.432 START TEST asan 00:02:15.432 ************************************ 00:02:15.432 using asan 00:02:15.432 18:09:01 asan -- common/autotest_common.sh@1123 -- $ echo 'using asan' 00:02:15.432 00:02:15.432 real 0m0.000s 00:02:15.432 user 0m0.000s 00:02:15.432 sys 0m0.000s 00:02:15.432 18:09:01 asan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:02:15.432 ************************************ 00:02:15.432 END TEST asan 00:02:15.432 ************************************ 00:02:15.432 18:09:01 asan -- common/autotest_common.sh@10 -- $ set +x 00:02:15.432 18:09:01 -- common/autotest_common.sh@1142 -- $ return 0 00:02:15.432 18:09:01 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:15.432 18:09:01 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:15.432 18:09:01 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:02:15.432 18:09:01 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:15.432 18:09:01 -- common/autotest_common.sh@10 -- $ set +x 00:02:15.432 ************************************ 00:02:15.432 START TEST ubsan 00:02:15.432 ************************************ 00:02:15.432 using ubsan 00:02:15.432 18:09:01 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:02:15.432 00:02:15.432 real 0m0.000s 00:02:15.432 user 0m0.000s 00:02:15.432 sys 0m0.000s 00:02:15.432 18:09:01 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:02:15.432 18:09:01 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:15.432 ************************************ 00:02:15.432 END TEST ubsan 00:02:15.432 ************************************ 00:02:15.432 18:09:01 -- common/autotest_common.sh@1142 -- $ return 0 00:02:15.432 18:09:01 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']' 00:02:15.432 18:09:01 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:02:15.432 18:09:01 -- common/autobuild_common.sh@436 -- $ run_test build_native_dpdk _build_native_dpdk 00:02:15.432 18:09:01 -- common/autotest_common.sh@1099 -- $ '[' 2 -le 1 ']' 00:02:15.432 18:09:01 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:15.432 18:09:01 -- common/autotest_common.sh@10 -- $ set +x 00:02:15.432 ************************************ 00:02:15.432 START TEST build_native_dpdk 00:02:15.432 ************************************ 00:02:15.432 18:09:01 build_native_dpdk -- common/autotest_common.sh@1123 -- $ _build_native_dpdk 00:02:15.432 18:09:01 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:02:15.432 18:09:01 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:02:15.433 18:09:01 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:02:15.433 18:09:01 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:02:15.433 18:09:01 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:02:15.433 18:09:01 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:02:15.433 18:09:01 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:02:15.433 18:09:01 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:15.433 18:09:01 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:02:15.433 18:09:01 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:02:15.433 18:09:01 
build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:15.433 18:09:01 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:02:15.433 18:09:01 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:02:15.433 18:09:01 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:02:15.433 18:09:01 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:02:15.433 18:09:01 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:15.433 18:09:01 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:02:15.433 18:09:01 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /home/vagrant/spdk_repo/dpdk ]] 00:02:15.433 18:09:01 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:02:15.433 18:09:01 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:02:15.433 caf0f5d395 version: 22.11.4 00:02:15.433 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:02:15.433 dc9c799c7d vhost: fix missing spinlock unlock 00:02:15.433 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:02:15.433 6ef77f2a5e net/gve: fix RX buffer size alignment 00:02:15.433 18:09:01 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:15.433 18:09:01 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:15.433 18:09:01 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4 00:02:15.433 18:09:01 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:15.433 18:09:01 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:02:15.433 18:09:01 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:15.433 18:09:01 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:15.433 18:09:01 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:02:15.433 18:09:01 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:15.433 18:09:01 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:02:15.433 18:09:01 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:02:15.433 18:09:01 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:15.433 18:09:01 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:15.433 18:09:01 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:02:15.433 18:09:01 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 00:02:15.433 18:09:01 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:02:15.433 18:09:01 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:02:15.433 18:09:01 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 22.11.4 21.11.0 00:02:15.433 18:09:01 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 22.11.4 '<' 21.11.0 00:02:15.433 18:09:01 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:02:15.433 18:09:01 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:02:15.433 
18:09:01 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:02:15.433 18:09:01 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 00:02:15.433 18:09:01 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:02:15.433 18:09:01 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:02:15.433 18:09:01 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<' 00:02:15.433 18:09:01 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=3 00:02:15.433 18:09:01 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:02:15.433 18:09:01 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:02:15.433 18:09:01 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 00:02:15.433 18:09:01 build_native_dpdk -- scripts/common.sh@342 -- $ : 1 00:02:15.433 18:09:01 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:02:15.433 18:09:01 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:15.433 18:09:01 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 22 00:02:15.433 18:09:01 build_native_dpdk -- scripts/common.sh@350 -- $ local d=22 00:02:15.433 18:09:01 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:15.433 18:09:01 build_native_dpdk -- scripts/common.sh@352 -- $ echo 22 00:02:15.433 18:09:01 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=22 00:02:15.433 18:09:01 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 21 00:02:15.433 18:09:01 build_native_dpdk -- scripts/common.sh@350 -- $ local d=21 00:02:15.433 18:09:01 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:15.433 18:09:01 build_native_dpdk -- scripts/common.sh@352 -- $ echo 21 00:02:15.433 18:09:01 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=21 00:02:15.433 18:09:01 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:02:15.433 18:09:01 build_native_dpdk -- scripts/common.sh@364 -- $ return 1 00:02:15.433 18:09:01 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:02:15.433 patching file config/rte_config.h 00:02:15.433 Hunk #1 succeeded at 60 (offset 1 line). 
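The lt/cmp_versions trace above splits each version string on '.', '-' and ':' and compares the pieces numerically: 22 > 21 at the first position, so the function returns 1 (22.11.4 is not older than 21.11.0) and the >=21.11 patching path runs, producing the "patching file config/rte_config.h" line. A standalone sketch of that comparison, assuming numeric components as the decimal check in the trace enforces; version_lt is a hypothetical name, SPDK's real implementation lives in scripts/common.sh:

  #!/usr/bin/env bash
  # version_lt A B -> exit 0 iff A < B, mirroring the cmp_versions trace above.
  version_lt() {
      local -a ver1 ver2
      IFS=.-: read -ra ver1 <<< "$1"
      IFS=.-: read -ra ver2 <<< "$2"
      local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( v = 0; v < len; v++ )); do
          # Missing components compare as 0 (e.g. 22.11 vs 22.11.0).
          local a=${ver1[v]:-0} b=${ver2[v]:-0}
          (( a > b )) && return 1
          (( a < b )) && return 0
      done
      return 1   # equal is not "less than"
  }

  version_lt 22.11.4 21.11.0 && echo "older than 21.11" || echo "21.11 or newer"
  # -> "21.11 or newer", matching the 'return 1' in the log.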
00:02:15.433 18:09:01 build_native_dpdk -- common/autobuild_common.sh@177 -- $ dpdk_kmods=false 00:02:15.433 18:09:01 build_native_dpdk -- common/autobuild_common.sh@178 -- $ uname -s 00:02:15.433 18:09:01 build_native_dpdk -- common/autobuild_common.sh@178 -- $ '[' Linux = FreeBSD ']' 00:02:15.433 18:09:01 build_native_dpdk -- common/autobuild_common.sh@182 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:02:15.433 18:09:01 build_native_dpdk -- common/autobuild_common.sh@182 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:20.702 The Meson build system 00:02:20.702 Version: 1.3.1 00:02:20.702 Source dir: /home/vagrant/spdk_repo/dpdk 00:02:20.702 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:02:20.702 Build type: native build 00:02:20.702 Program cat found: YES (/usr/bin/cat) 00:02:20.702 Project name: DPDK 00:02:20.702 Project version: 22.11.4 00:02:20.702 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:20.702 C linker for the host machine: gcc ld.bfd 2.39-16 00:02:20.702 Host machine cpu family: x86_64 00:02:20.702 Host machine cpu: x86_64 00:02:20.702 Message: ## Building in Developer Mode ## 00:02:20.702 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:20.702 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:02:20.702 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:02:20.702 Program objdump found: YES (/usr/bin/objdump) 00:02:20.702 Program python3 found: YES (/usr/bin/python3) 00:02:20.702 Program cat found: YES (/usr/bin/cat) 00:02:20.702 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:02:20.702 Checking for size of "void *" : 8 00:02:20.702 Checking for size of "void *" : 8 (cached) 00:02:20.702 Library m found: YES 00:02:20.702 Library numa found: YES 00:02:20.702 Has header "numaif.h" : YES 00:02:20.702 Library fdt found: NO 00:02:20.702 Library execinfo found: NO 00:02:20.702 Has header "execinfo.h" : YES 00:02:20.702 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:20.702 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:20.702 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:20.702 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:20.702 Run-time dependency openssl found: YES 3.0.9 00:02:20.702 Run-time dependency libpcap found: YES 1.10.4 00:02:20.702 Has header "pcap.h" with dependency libpcap: YES 00:02:20.702 Compiler for C supports arguments -Wcast-qual: YES 00:02:20.702 Compiler for C supports arguments -Wdeprecated: YES 00:02:20.702 Compiler for C supports arguments -Wformat: YES 00:02:20.702 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:20.702 Compiler for C supports arguments -Wformat-security: NO 00:02:20.702 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:20.702 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:20.702 Compiler for C supports arguments -Wnested-externs: YES 00:02:20.702 Compiler for C supports arguments -Wold-style-definition: YES 00:02:20.702 Compiler for C supports arguments -Wpointer-arith: YES 00:02:20.702 Compiler for C supports arguments -Wsign-compare: YES 00:02:20.702 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:20.702 Compiler for C supports arguments -Wundef: YES 00:02:20.702 Compiler for C supports arguments -Wwrite-strings: YES 00:02:20.702 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:20.702 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:20.702 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:20.702 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:20.702 Compiler for C supports arguments -mavx512f: YES 00:02:20.702 Checking if "AVX512 checking" compiles: YES 00:02:20.702 Fetching value of define "__SSE4_2__" : 1 00:02:20.702 Fetching value of define "__AES__" : 1 00:02:20.702 Fetching value of define "__AVX__" : 1 00:02:20.702 Fetching value of define "__AVX2__" : 1 00:02:20.702 Fetching value of define "__AVX512BW__" : (undefined) 00:02:20.702 Fetching value of define "__AVX512CD__" : (undefined) 00:02:20.702 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:20.702 Fetching value of define "__AVX512F__" : (undefined) 00:02:20.702 Fetching value of define "__AVX512VL__" : (undefined) 00:02:20.702 Fetching value of define "__PCLMUL__" : 1 00:02:20.702 Fetching value of define "__RDRND__" : 1 00:02:20.702 Fetching value of define "__RDSEED__" : 1 00:02:20.702 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:20.702 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:20.702 Message: lib/kvargs: Defining dependency "kvargs" 00:02:20.702 Message: lib/telemetry: Defining dependency "telemetry" 00:02:20.702 Checking for function "getentropy" : YES 00:02:20.702 Message: lib/eal: Defining dependency "eal" 00:02:20.702 Message: lib/ring: Defining dependency "ring" 00:02:20.702 Message: lib/rcu: Defining dependency "rcu" 00:02:20.702 Message: lib/mempool: Defining dependency "mempool" 00:02:20.702 Message: lib/mbuf: Defining dependency "mbuf" 00:02:20.702 Fetching value of define 
"__PCLMUL__" : 1 (cached) 00:02:20.702 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:20.702 Compiler for C supports arguments -mpclmul: YES 00:02:20.703 Compiler for C supports arguments -maes: YES 00:02:20.703 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:20.703 Compiler for C supports arguments -mavx512bw: YES 00:02:20.703 Compiler for C supports arguments -mavx512dq: YES 00:02:20.703 Compiler for C supports arguments -mavx512vl: YES 00:02:20.703 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:20.703 Compiler for C supports arguments -mavx2: YES 00:02:20.703 Compiler for C supports arguments -mavx: YES 00:02:20.703 Message: lib/net: Defining dependency "net" 00:02:20.703 Message: lib/meter: Defining dependency "meter" 00:02:20.703 Message: lib/ethdev: Defining dependency "ethdev" 00:02:20.703 Message: lib/pci: Defining dependency "pci" 00:02:20.703 Message: lib/cmdline: Defining dependency "cmdline" 00:02:20.703 Message: lib/metrics: Defining dependency "metrics" 00:02:20.703 Message: lib/hash: Defining dependency "hash" 00:02:20.703 Message: lib/timer: Defining dependency "timer" 00:02:20.703 Fetching value of define "__AVX2__" : 1 (cached) 00:02:20.703 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:20.703 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:02:20.703 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:02:20.703 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:02:20.703 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:02:20.703 Message: lib/acl: Defining dependency "acl" 00:02:20.703 Message: lib/bbdev: Defining dependency "bbdev" 00:02:20.703 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:20.703 Run-time dependency libelf found: YES 0.190 00:02:20.703 Message: lib/bpf: Defining dependency "bpf" 00:02:20.703 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:20.703 Message: lib/compressdev: Defining dependency "compressdev" 00:02:20.703 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:20.703 Message: lib/distributor: Defining dependency "distributor" 00:02:20.703 Message: lib/efd: Defining dependency "efd" 00:02:20.703 Message: lib/eventdev: Defining dependency "eventdev" 00:02:20.703 Message: lib/gpudev: Defining dependency "gpudev" 00:02:20.703 Message: lib/gro: Defining dependency "gro" 00:02:20.703 Message: lib/gso: Defining dependency "gso" 00:02:20.703 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:20.703 Message: lib/jobstats: Defining dependency "jobstats" 00:02:20.703 Message: lib/latencystats: Defining dependency "latencystats" 00:02:20.703 Message: lib/lpm: Defining dependency "lpm" 00:02:20.703 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:20.703 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:20.703 Fetching value of define "__AVX512IFMA__" : (undefined) 00:02:20.703 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:02:20.703 Message: lib/member: Defining dependency "member" 00:02:20.703 Message: lib/pcapng: Defining dependency "pcapng" 00:02:20.703 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:20.703 Message: lib/power: Defining dependency "power" 00:02:20.703 Message: lib/rawdev: Defining dependency "rawdev" 00:02:20.703 Message: lib/regexdev: Defining dependency "regexdev" 00:02:20.703 Message: lib/dmadev: Defining dependency "dmadev" 00:02:20.703 Message: lib/rib: Defining 
dependency "rib" 00:02:20.703 Message: lib/reorder: Defining dependency "reorder" 00:02:20.703 Message: lib/sched: Defining dependency "sched" 00:02:20.703 Message: lib/security: Defining dependency "security" 00:02:20.703 Message: lib/stack: Defining dependency "stack" 00:02:20.703 Has header "linux/userfaultfd.h" : YES 00:02:20.703 Message: lib/vhost: Defining dependency "vhost" 00:02:20.703 Message: lib/ipsec: Defining dependency "ipsec" 00:02:20.703 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:20.703 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:20.703 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:02:20.703 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:20.703 Message: lib/fib: Defining dependency "fib" 00:02:20.703 Message: lib/port: Defining dependency "port" 00:02:20.703 Message: lib/pdump: Defining dependency "pdump" 00:02:20.703 Message: lib/table: Defining dependency "table" 00:02:20.703 Message: lib/pipeline: Defining dependency "pipeline" 00:02:20.703 Message: lib/graph: Defining dependency "graph" 00:02:20.703 Message: lib/node: Defining dependency "node" 00:02:20.703 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:20.703 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:20.703 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:20.703 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:20.703 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:20.703 Compiler for C supports arguments -Wno-unused-value: YES 00:02:20.703 Compiler for C supports arguments -Wno-format: YES 00:02:20.703 Compiler for C supports arguments -Wno-format-security: YES 00:02:20.703 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:21.640 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:21.640 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:21.640 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:21.640 Fetching value of define "__AVX2__" : 1 (cached) 00:02:21.640 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:21.640 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:21.640 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:21.640 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:21.640 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:21.640 Program doxygen found: YES (/usr/bin/doxygen) 00:02:21.640 Configuring doxy-api.conf using configuration 00:02:21.640 Program sphinx-build found: NO 00:02:21.640 Configuring rte_build_config.h using configuration 00:02:21.640 Message: 00:02:21.640 ================= 00:02:21.640 Applications Enabled 00:02:21.640 ================= 00:02:21.640 00:02:21.640 apps: 00:02:21.640 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, 00:02:21.640 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad, 00:02:21.640 test-security-perf, 00:02:21.640 00:02:21.640 Message: 00:02:21.640 ================= 00:02:21.640 Libraries Enabled 00:02:21.640 ================= 00:02:21.640 00:02:21.640 libs: 00:02:21.640 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net, 00:02:21.640 meter, ethdev, pci, cmdline, metrics, hash, timer, acl, 00:02:21.640 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd, 00:02:21.640 eventdev, gpudev, gro, gso, ip_frag, 
jobstats, latencystats, lpm,
00:02:21.640 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder,
00:02:21.640 sched, security, stack, vhost, ipsec, fib, port, pdump,
00:02:21.640 table, pipeline, graph, node,
00:02:21.640 
00:02:21.640 Message: 
00:02:21.640 ===============
00:02:21.640 Drivers Enabled
00:02:21.640 ===============
00:02:21.640 
00:02:21.640 common:
00:02:21.640 
00:02:21.640 bus:
00:02:21.640 pci, vdev,
00:02:21.640 mempool:
00:02:21.640 ring,
00:02:21.640 dma:
00:02:21.640 
00:02:21.640 net:
00:02:21.640 i40e,
00:02:21.640 raw:
00:02:21.640 
00:02:21.640 crypto:
00:02:21.640 
00:02:21.640 compress:
00:02:21.640 
00:02:21.640 regex:
00:02:21.640 
00:02:21.640 vdpa:
00:02:21.640 
00:02:21.640 event:
00:02:21.640 
00:02:21.640 baseband:
00:02:21.640 
00:02:21.640 gpu:
00:02:21.640 
00:02:21.640 
00:02:21.640 Message: 
00:02:21.640 =================
00:02:21.640 Content Skipped
00:02:21.640 =================
00:02:21.640 
00:02:21.640 apps:
00:02:21.640 
00:02:21.640 libs:
00:02:21.640 kni: explicitly disabled via build config (deprecated lib)
00:02:21.640 flow_classify: explicitly disabled via build config (deprecated lib)
00:02:21.640 
00:02:21.640 drivers:
00:02:21.640 common/cpt: not in enabled drivers build config
00:02:21.640 common/dpaax: not in enabled drivers build config
00:02:21.640 common/iavf: not in enabled drivers build config
00:02:21.640 common/idpf: not in enabled drivers build config
00:02:21.640 common/mvep: not in enabled drivers build config
00:02:21.640 common/octeontx: not in enabled drivers build config
00:02:21.640 bus/auxiliary: not in enabled drivers build config
00:02:21.640 bus/dpaa: not in enabled drivers build config
00:02:21.640 bus/fslmc: not in enabled drivers build config
00:02:21.640 bus/ifpga: not in enabled drivers build config
00:02:21.640 bus/vmbus: not in enabled drivers build config
00:02:21.640 common/cnxk: not in enabled drivers build config
00:02:21.640 common/mlx5: not in enabled drivers build config
00:02:21.640 common/qat: not in enabled drivers build config
00:02:21.640 common/sfc_efx: not in enabled drivers build config
00:02:21.640 mempool/bucket: not in enabled drivers build config
00:02:21.640 mempool/cnxk: not in enabled drivers build config
00:02:21.640 mempool/dpaa: not in enabled drivers build config
00:02:21.640 mempool/dpaa2: not in enabled drivers build config
00:02:21.640 mempool/octeontx: not in enabled drivers build config
00:02:21.640 mempool/stack: not in enabled drivers build config
00:02:21.640 dma/cnxk: not in enabled drivers build config
00:02:21.640 dma/dpaa: not in enabled drivers build config
00:02:21.640 dma/dpaa2: not in enabled drivers build config
00:02:21.640 dma/hisilicon: not in enabled drivers build config
00:02:21.640 dma/idxd: not in enabled drivers build config
00:02:21.640 dma/ioat: not in enabled drivers build config
00:02:21.640 dma/skeleton: not in enabled drivers build config
00:02:21.640 net/af_packet: not in enabled drivers build config
00:02:21.640 net/af_xdp: not in enabled drivers build config
00:02:21.640 net/ark: not in enabled drivers build config
00:02:21.640 net/atlantic: not in enabled drivers build config
00:02:21.640 net/avp: not in enabled drivers build config
00:02:21.640 net/axgbe: not in enabled drivers build config
00:02:21.640 net/bnx2x: not in enabled drivers build config
00:02:21.640 net/bnxt: not in enabled drivers build config
00:02:21.640 net/bonding: not in enabled drivers build config
00:02:21.640 net/cnxk: not in enabled drivers build config
00:02:21.640 net/cxgbe: not in enabled drivers build config
00:02:21.640 net/dpaa: not in enabled drivers build config
00:02:21.640 net/dpaa2: not in enabled drivers build config
00:02:21.640 net/e1000: not in enabled drivers build config
00:02:21.640 net/ena: not in enabled drivers build config
00:02:21.640 net/enetc: not in enabled drivers build config
00:02:21.640 net/enetfec: not in enabled drivers build config
00:02:21.640 net/enic: not in enabled drivers build config
00:02:21.640 net/failsafe: not in enabled drivers build config
00:02:21.640 net/fm10k: not in enabled drivers build config
00:02:21.640 net/gve: not in enabled drivers build config
00:02:21.640 net/hinic: not in enabled drivers build config
00:02:21.640 net/hns3: not in enabled drivers build config
00:02:21.640 net/iavf: not in enabled drivers build config
00:02:21.640 net/ice: not in enabled drivers build config
00:02:21.640 net/idpf: not in enabled drivers build config
00:02:21.640 net/igc: not in enabled drivers build config
00:02:21.640 net/ionic: not in enabled drivers build config
00:02:21.640 net/ipn3ke: not in enabled drivers build config
00:02:21.640 net/ixgbe: not in enabled drivers build config
00:02:21.640 net/kni: not in enabled drivers build config
00:02:21.640 net/liquidio: not in enabled drivers build config
00:02:21.640 net/mana: not in enabled drivers build config
00:02:21.640 net/memif: not in enabled drivers build config
00:02:21.640 net/mlx4: not in enabled drivers build config
00:02:21.640 net/mlx5: not in enabled drivers build config
00:02:21.641 net/mvneta: not in enabled drivers build config
00:02:21.641 net/mvpp2: not in enabled drivers build config
00:02:21.641 net/netvsc: not in enabled drivers build config
00:02:21.641 net/nfb: not in enabled drivers build config
00:02:21.641 net/nfp: not in enabled drivers build config
00:02:21.641 net/ngbe: not in enabled drivers build config
00:02:21.641 net/null: not in enabled drivers build config
00:02:21.641 net/octeontx: not in enabled drivers build config
00:02:21.641 net/octeon_ep: not in enabled drivers build config
00:02:21.641 net/pcap: not in enabled drivers build config
00:02:21.641 net/pfe: not in enabled drivers build config
00:02:21.641 net/qede: not in enabled drivers build config
00:02:21.641 net/ring: not in enabled drivers build config
00:02:21.641 net/sfc: not in enabled drivers build config
00:02:21.641 net/softnic: not in enabled drivers build config
00:02:21.641 net/tap: not in enabled drivers build config
00:02:21.641 net/thunderx: not in enabled drivers build config
00:02:21.641 net/txgbe: not in enabled drivers build config
00:02:21.641 net/vdev_netvsc: not in enabled drivers build config
00:02:21.641 net/vhost: not in enabled drivers build config
00:02:21.641 net/virtio: not in enabled drivers build config
00:02:21.641 net/vmxnet3: not in enabled drivers build config
00:02:21.641 raw/cnxk_bphy: not in enabled drivers build config
00:02:21.641 raw/cnxk_gpio: not in enabled drivers build config
00:02:21.641 raw/dpaa2_cmdif: not in enabled drivers build config
00:02:21.641 raw/ifpga: not in enabled drivers build config
00:02:21.641 raw/ntb: not in enabled drivers build config
00:02:21.641 raw/skeleton: not in enabled drivers build config
00:02:21.641 crypto/armv8: not in enabled drivers build config
00:02:21.641 crypto/bcmfs: not in enabled drivers build config
00:02:21.641 crypto/caam_jr: not in enabled drivers build config
00:02:21.641 crypto/ccp: not in enabled drivers build config
00:02:21.641 crypto/cnxk: not in enabled drivers build config
00:02:21.641 crypto/dpaa_sec: not in enabled drivers build config
00:02:21.641 crypto/dpaa2_sec: not in enabled drivers build config
00:02:21.641 crypto/ipsec_mb: not in enabled drivers build config
00:02:21.641 crypto/mlx5: not in enabled drivers build config
00:02:21.641 crypto/mvsam: not in enabled drivers build config
00:02:21.641 crypto/nitrox: not in enabled drivers build config
00:02:21.641 crypto/null: not in enabled drivers build config
00:02:21.641 crypto/octeontx: not in enabled drivers build config
00:02:21.641 crypto/openssl: not in enabled drivers build config
00:02:21.641 crypto/scheduler: not in enabled drivers build config
00:02:21.641 crypto/uadk: not in enabled drivers build config
00:02:21.641 crypto/virtio: not in enabled drivers build config
00:02:21.641 compress/isal: not in enabled drivers build config
00:02:21.641 compress/mlx5: not in enabled drivers build config
00:02:21.641 compress/octeontx: not in enabled drivers build config
00:02:21.641 compress/zlib: not in enabled drivers build config
00:02:21.641 regex/mlx5: not in enabled drivers build config
00:02:21.641 regex/cn9k: not in enabled drivers build config
00:02:21.641 vdpa/ifc: not in enabled drivers build config
00:02:21.641 vdpa/mlx5: not in enabled drivers build config
00:02:21.641 vdpa/sfc: not in enabled drivers build config
00:02:21.641 event/cnxk: not in enabled drivers build config
00:02:21.641 event/dlb2: not in enabled drivers build config
00:02:21.641 event/dpaa: not in enabled drivers build config
00:02:21.641 event/dpaa2: not in enabled drivers build config
00:02:21.641 event/dsw: not in enabled drivers build config
00:02:21.641 event/opdl: not in enabled drivers build config
00:02:21.641 event/skeleton: not in enabled drivers build config
00:02:21.641 event/sw: not in enabled drivers build config
00:02:21.641 event/octeontx: not in enabled drivers build config
00:02:21.641 baseband/acc: not in enabled drivers build config
00:02:21.641 baseband/fpga_5gnr_fec: not in enabled drivers build config
00:02:21.641 baseband/fpga_lte_fec: not in enabled drivers build config
00:02:21.641 baseband/la12xx: not in enabled drivers build config
00:02:21.641 baseband/null: not in enabled drivers build config
00:02:21.641 baseband/turbo_sw: not in enabled drivers build config
00:02:21.641 gpu/cuda: not in enabled drivers build config
00:02:21.641 
00:02:21.641 
00:02:21.641 Build targets in project: 314
00:02:21.641 
00:02:21.641 DPDK 22.11.4
00:02:21.641 
00:02:21.641 User defined options
00:02:21.641 libdir : lib
00:02:21.641 prefix : /home/vagrant/spdk_repo/dpdk/build
00:02:21.641 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow
00:02:21.641 c_link_args : 
00:02:21.641 enable_docs : false
00:02:21.641 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
00:02:21.641 enable_kmods : false
00:02:21.641 machine : native
00:02:21.641 tests : false
00:02:21.641 
00:02:21.641 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:21.641 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated.
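[Editor's note] For reference, the "User defined options" summary above corresponds roughly to a meson setup invocation like the sketch below. This is a reconstruction from the logged option values, not the exact command line the autobuild script ran: --prefix and --libdir are built-in meson options, while enable_docs, enable_drivers, enable_kmods, machine, and tests are DPDK project options passed with -D; the explicit `meson setup` spelling is the form the deprecation warning above asks for.

    # Sketch reconstructed from the configuration summary; paths and the
    # build directory name (build-tmp) are taken from the log.
    meson setup build-tmp \
        --prefix=/home/vagrant/spdk_repo/dpdk/build \
        --libdir=lib \
        -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
        -Denable_docs=false \
        -Denable_kmods=false \
        -Dtests=false \
        -Dmachine=native \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base
    # c_link_args is empty in the summary, so it is omitted here. The build
    # itself is then driven by ninja, as the next log entry shows:
    ninja -C build-tmp -j10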
00:02:21.899 18:09:08 build_native_dpdk -- common/autobuild_common.sh@186 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:02:21.899 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:21.899 [1/743] Generating lib/rte_kvargs_def with a custom command 00:02:21.899 [2/743] Generating lib/rte_kvargs_mingw with a custom command 00:02:21.899 [3/743] Generating lib/rte_telemetry_mingw with a custom command 00:02:21.899 [4/743] Generating lib/rte_telemetry_def with a custom command 00:02:21.899 [5/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:22.157 [6/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:22.157 [7/743] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:22.157 [8/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:22.157 [9/743] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:22.157 [10/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:22.157 [11/743] Linking static target lib/librte_kvargs.a 00:02:22.157 [12/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:22.157 [13/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:22.157 [14/743] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:22.157 [15/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:22.415 [16/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:22.415 [17/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:22.415 [18/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:22.415 [19/743] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.415 [20/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:22.415 [21/743] Linking target lib/librte_kvargs.so.23.0 00:02:22.415 [22/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:22.415 [23/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:02:22.415 [24/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:22.415 [25/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:22.415 [26/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:22.674 [27/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:22.674 [28/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:22.674 [29/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:22.674 [30/743] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:22.674 [31/743] Linking static target lib/librte_telemetry.a 00:02:22.674 [32/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:22.674 [33/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:22.674 [34/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:22.674 [35/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:22.674 [36/743] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols 00:02:22.674 [37/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:22.674 [38/743] Compiling C 
object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:22.674 [39/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:22.674 [40/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:22.932 [41/743] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:22.932 [42/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:22.932 [43/743] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.932 [44/743] Linking target lib/librte_telemetry.so.23.0 00:02:22.932 [45/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:23.190 [46/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:23.190 [47/743] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 00:02:23.190 [48/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:23.190 [49/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:23.190 [50/743] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:23.190 [51/743] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:23.190 [52/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:23.190 [53/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:23.190 [54/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:23.190 [55/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:23.190 [56/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:23.190 [57/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:23.190 [58/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:23.190 [59/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:23.448 [60/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:23.448 [61/743] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:23.448 [62/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:23.448 [63/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:23.448 [64/743] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:23.448 [65/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 00:02:23.448 [66/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:23.448 [67/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:23.448 [68/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:23.448 [69/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:23.448 [70/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:23.706 [71/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:23.706 [72/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:23.706 [73/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:23.706 [74/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:23.706 [75/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:23.706 [76/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:23.706 [77/743] Generating lib/rte_eal_def with a custom command 00:02:23.706 [78/743] Generating lib/rte_eal_mingw with a custom 
command 00:02:23.706 [79/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:23.706 [80/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:23.706 [81/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:23.706 [82/743] Generating lib/rte_ring_def with a custom command 00:02:23.706 [83/743] Generating lib/rte_ring_mingw with a custom command 00:02:23.706 [84/743] Generating lib/rte_rcu_def with a custom command 00:02:23.706 [85/743] Generating lib/rte_rcu_mingw with a custom command 00:02:23.706 [86/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:23.964 [87/743] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:23.964 [88/743] Linking static target lib/librte_ring.a 00:02:23.964 [89/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:23.964 [90/743] Generating lib/rte_mempool_def with a custom command 00:02:23.964 [91/743] Generating lib/rte_mempool_mingw with a custom command 00:02:23.964 [92/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:23.964 [93/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:24.222 [94/743] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.222 [95/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:24.222 [96/743] Linking static target lib/librte_eal.a 00:02:24.480 [97/743] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:24.480 [98/743] Generating lib/rte_mbuf_def with a custom command 00:02:24.480 [99/743] Generating lib/rte_mbuf_mingw with a custom command 00:02:24.480 [100/743] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:24.480 [101/743] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:24.480 [102/743] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:24.480 [103/743] Linking static target lib/librte_rcu.a 00:02:24.480 [104/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:24.480 [105/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:24.738 [106/743] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:24.738 [107/743] Linking static target lib/librte_mempool.a 00:02:24.996 [108/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:24.996 [109/743] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.996 [110/743] Generating lib/rte_net_def with a custom command 00:02:24.996 [111/743] Generating lib/rte_net_mingw with a custom command 00:02:24.996 [112/743] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:24.996 [113/743] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:24.996 [114/743] Generating lib/rte_meter_def with a custom command 00:02:24.996 [115/743] Generating lib/rte_meter_mingw with a custom command 00:02:24.996 [116/743] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:24.996 [117/743] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:24.996 [118/743] Linking static target lib/librte_meter.a 00:02:25.255 [119/743] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:25.255 [120/743] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:25.255 [121/743] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:25.255 [122/743] Generating lib/meter.sym_chk with a custom 
command (wrapped by meson to capture output) 00:02:25.519 [123/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:25.519 [124/743] Linking static target lib/librte_mbuf.a 00:02:25.519 [125/743] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:25.519 [126/743] Linking static target lib/librte_net.a 00:02:25.520 [127/743] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.796 [128/743] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.796 [129/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:25.796 [130/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:25.796 [131/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:26.053 [132/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:26.053 [133/743] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.053 [134/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:26.311 [135/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:26.569 [136/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:26.569 [137/743] Generating lib/rte_ethdev_def with a custom command 00:02:26.569 [138/743] Generating lib/rte_ethdev_mingw with a custom command 00:02:26.827 [139/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:26.827 [140/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:26.827 [141/743] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:26.827 [142/743] Linking static target lib/librte_pci.a 00:02:26.827 [143/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:26.828 [144/743] Generating lib/rte_pci_def with a custom command 00:02:26.828 [145/743] Generating lib/rte_pci_mingw with a custom command 00:02:26.828 [146/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:26.828 [147/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:26.828 [148/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:26.828 [149/743] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.828 [150/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:27.086 [151/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:27.086 [152/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:27.086 [153/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:27.086 [154/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:27.086 [155/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:27.086 [156/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:27.086 [157/743] Generating lib/rte_cmdline_def with a custom command 00:02:27.086 [158/743] Generating lib/rte_cmdline_mingw with a custom command 00:02:27.086 [159/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:27.086 [160/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:27.086 [161/743] Generating lib/rte_metrics_def with a custom command 00:02:27.086 [162/743] Generating lib/rte_metrics_mingw with a custom command 00:02:27.346 [163/743] Compiling C object 
lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:27.346 [164/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:27.346 [165/743] Generating lib/rte_hash_def with a custom command 00:02:27.346 [166/743] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:27.346 [167/743] Generating lib/rte_hash_mingw with a custom command 00:02:27.346 [168/743] Generating lib/rte_timer_def with a custom command 00:02:27.346 [169/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:27.346 [170/743] Generating lib/rte_timer_mingw with a custom command 00:02:27.346 [171/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:27.346 [172/743] Linking static target lib/librte_cmdline.a 00:02:27.605 [173/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:27.872 [174/743] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:27.872 [175/743] Linking static target lib/librte_metrics.a 00:02:27.872 [176/743] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:27.872 [177/743] Linking static target lib/librte_timer.a 00:02:28.132 [178/743] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.132 [179/743] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.132 [180/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:28.132 [181/743] Linking static target lib/librte_ethdev.a 00:02:28.389 [182/743] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:28.389 [183/743] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.389 [184/743] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:28.956 [185/743] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:28.956 [186/743] Generating lib/rte_acl_def with a custom command 00:02:28.956 [187/743] Generating lib/rte_acl_mingw with a custom command 00:02:28.956 [188/743] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:28.956 [189/743] Generating lib/rte_bbdev_def with a custom command 00:02:28.956 [190/743] Generating lib/rte_bbdev_mingw with a custom command 00:02:28.956 [191/743] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:28.956 [192/743] Generating lib/rte_bitratestats_def with a custom command 00:02:29.214 [193/743] Generating lib/rte_bitratestats_mingw with a custom command 00:02:29.473 [194/743] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:29.731 [195/743] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:29.731 [196/743] Linking static target lib/librte_bitratestats.a 00:02:29.731 [197/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:29.731 [198/743] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.990 [199/743] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:29.990 [200/743] Linking static target lib/librte_bbdev.a 00:02:29.990 [201/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:30.248 [202/743] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:30.248 [203/743] Linking static target lib/librte_hash.a 00:02:30.248 [204/743] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:02:30.248 [205/743] Linking static target lib/acl/libavx512_tmp.a 00:02:30.507 [206/743] Compiling C object 
lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:30.507 [207/743] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.507 [208/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:30.507 [209/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:30.764 [210/743] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.023 [211/743] Generating lib/rte_bpf_def with a custom command 00:02:31.023 [212/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:31.023 [213/743] Generating lib/rte_bpf_mingw with a custom command 00:02:31.023 [214/743] Generating lib/rte_cfgfile_def with a custom command 00:02:31.023 [215/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:31.023 [216/743] Generating lib/rte_cfgfile_mingw with a custom command 00:02:31.282 [217/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:31.282 [218/743] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx2.c.o 00:02:31.282 [219/743] Linking static target lib/librte_acl.a 00:02:31.282 [220/743] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:31.282 [221/743] Linking static target lib/librte_cfgfile.a 00:02:31.282 [222/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:02:31.282 [223/743] Generating lib/rte_compressdev_def with a custom command 00:02:31.541 [224/743] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.541 [225/743] Generating lib/rte_compressdev_mingw with a custom command 00:02:31.541 [226/743] Linking target lib/librte_eal.so.23.0 00:02:31.541 [227/743] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.541 [228/743] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.541 [229/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:31.541 [230/743] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:31.541 [231/743] Generating lib/rte_cryptodev_def with a custom command 00:02:31.541 [232/743] Generating lib/rte_cryptodev_mingw with a custom command 00:02:31.541 [233/743] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:02:31.799 [234/743] Linking target lib/librte_ring.so.23.0 00:02:31.799 [235/743] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:02:31.799 [236/743] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:31.799 [237/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:31.799 [238/743] Linking target lib/librte_rcu.so.23.0 00:02:31.799 [239/743] Linking target lib/librte_mempool.so.23.0 00:02:31.799 [240/743] Linking target lib/librte_meter.so.23.0 00:02:32.057 [241/743] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:32.057 [242/743] Linking target lib/librte_pci.so.23.0 00:02:32.057 [243/743] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:02:32.057 [244/743] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:02:32.057 [245/743] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:02:32.057 [246/743] Linking target lib/librte_timer.so.23.0 00:02:32.057 [247/743] Linking target lib/librte_mbuf.so.23.0 00:02:32.057 [248/743] Linking target lib/librte_acl.so.23.0 00:02:32.057 [249/743] Generating symbol 
file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:02:32.057 [250/743] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:32.057 [251/743] Linking static target lib/librte_bpf.a 00:02:32.057 [252/743] Linking target lib/librte_cfgfile.so.23.0 00:02:32.057 [253/743] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:02:32.057 [254/743] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:02:32.057 [255/743] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:02:32.057 [256/743] Linking static target lib/librte_compressdev.a 00:02:32.316 [257/743] Linking target lib/librte_net.so.23.0 00:02:32.316 [258/743] Linking target lib/librte_bbdev.so.23.0 00:02:32.316 [259/743] Generating lib/rte_distributor_def with a custom command 00:02:32.316 [260/743] Generating lib/rte_distributor_mingw with a custom command 00:02:32.316 [261/743] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:02:32.316 [262/743] Linking target lib/librte_cmdline.so.23.0 00:02:32.316 [263/743] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:32.316 [264/743] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:32.316 [265/743] Generating lib/rte_efd_def with a custom command 00:02:32.316 [266/743] Linking target lib/librte_hash.so.23.0 00:02:32.316 [267/743] Generating lib/rte_efd_mingw with a custom command 00:02:32.316 [268/743] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.574 [269/743] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:02:32.833 [270/743] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:32.833 [271/743] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:32.833 [272/743] Linking static target lib/librte_distributor.a 00:02:32.833 [273/743] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.833 [274/743] Linking target lib/librte_ethdev.so.23.0 00:02:33.091 [275/743] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.091 [276/743] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.091 [277/743] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:02:33.091 [278/743] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:33.091 [279/743] Linking target lib/librte_distributor.so.23.0 00:02:33.091 [280/743] Linking target lib/librte_compressdev.so.23.0 00:02:33.091 [281/743] Linking target lib/librte_metrics.so.23.0 00:02:33.091 [282/743] Linking target lib/librte_bpf.so.23.0 00:02:33.091 [283/743] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:33.091 [284/743] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:02:33.091 [285/743] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:02:33.350 [286/743] Generating lib/rte_eventdev_def with a custom command 00:02:33.350 [287/743] Linking target lib/librte_bitratestats.so.23.0 00:02:33.350 [288/743] Generating lib/rte_eventdev_mingw with a custom command 00:02:33.350 [289/743] Generating lib/rte_gpudev_def with a custom command 00:02:33.350 [290/743] Generating lib/rte_gpudev_mingw with a 
custom command 00:02:33.607 [291/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:33.864 [292/743] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:33.864 [293/743] Linking static target lib/librte_efd.a 00:02:33.864 [294/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:34.123 [295/743] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:34.123 [296/743] Linking static target lib/librte_cryptodev.a 00:02:34.123 [297/743] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.123 [298/743] Linking target lib/librte_efd.so.23.0 00:02:34.123 [299/743] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:34.123 [300/743] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:34.123 [301/743] Linking static target lib/librte_gpudev.a 00:02:34.123 [302/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:34.123 [303/743] Generating lib/rte_gro_def with a custom command 00:02:34.123 [304/743] Generating lib/rte_gro_mingw with a custom command 00:02:34.381 [305/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:34.381 [306/743] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:34.639 [307/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:34.898 [308/743] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:34.898 [309/743] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.898 [310/743] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:34.898 [311/743] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:35.157 [312/743] Linking static target lib/librte_gro.a 00:02:35.157 [313/743] Linking target lib/librte_gpudev.so.23.0 00:02:35.157 [314/743] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:35.157 [315/743] Generating lib/rte_gso_def with a custom command 00:02:35.157 [316/743] Generating lib/rte_gso_mingw with a custom command 00:02:35.157 [317/743] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:35.157 [318/743] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:35.157 [319/743] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.157 [320/743] Linking target lib/librte_gro.so.23.0 00:02:35.416 [321/743] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:35.416 [322/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:35.416 [323/743] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:35.416 [324/743] Linking static target lib/librte_eventdev.a 00:02:35.416 [325/743] Generating lib/rte_ip_frag_def with a custom command 00:02:35.416 [326/743] Generating lib/rte_ip_frag_mingw with a custom command 00:02:35.675 [327/743] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:35.675 [328/743] Linking static target lib/librte_jobstats.a 00:02:35.675 [329/743] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:35.675 [330/743] Linking static target lib/librte_gso.a 00:02:35.675 [331/743] Generating lib/rte_jobstats_def with a custom command 00:02:35.675 [332/743] Generating lib/rte_jobstats_mingw with a custom command 00:02:35.675 [333/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:35.933 [334/743] Generating 
lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.933 [335/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:35.933 [336/743] Linking target lib/librte_gso.so.23.0 00:02:35.933 [337/743] Generating lib/rte_latencystats_mingw with a custom command 00:02:35.933 [338/743] Generating lib/rte_latencystats_def with a custom command 00:02:35.933 [339/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:35.933 [340/743] Generating lib/rte_lpm_def with a custom command 00:02:35.933 [341/743] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.933 [342/743] Generating lib/rte_lpm_mingw with a custom command 00:02:35.933 [343/743] Linking target lib/librte_jobstats.so.23.0 00:02:36.191 [344/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:36.191 [345/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:36.191 [346/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:36.191 [347/743] Linking static target lib/librte_ip_frag.a 00:02:36.191 [348/743] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.191 [349/743] Linking target lib/librte_cryptodev.so.23.0 00:02:36.450 [350/743] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:02:36.450 [351/743] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.450 [352/743] Linking target lib/librte_ip_frag.so.23.0 00:02:36.709 [353/743] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:36.709 [354/743] Linking static target lib/librte_latencystats.a 00:02:36.709 [355/743] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:02:36.709 [356/743] Generating lib/rte_member_def with a custom command 00:02:36.709 [357/743] Generating lib/rte_member_mingw with a custom command 00:02:36.709 [358/743] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:02:36.709 [359/743] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:36.709 [360/743] Linking static target lib/member/libsketch_avx512_tmp.a 00:02:36.709 [361/743] Generating lib/rte_pcapng_def with a custom command 00:02:36.709 [362/743] Generating lib/rte_pcapng_mingw with a custom command 00:02:36.709 [363/743] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.709 [364/743] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:36.967 [365/743] Linking target lib/librte_latencystats.so.23.0 00:02:36.967 [366/743] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:36.967 [367/743] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:36.967 [368/743] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:37.224 [369/743] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:37.224 [370/743] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:37.224 [371/743] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.224 [372/743] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:37.224 [373/743] Linking static target lib/librte_lpm.a 00:02:37.224 [374/743] Linking target lib/librte_eventdev.so.23.0 00:02:37.224 [375/743] 
Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:02:37.483 [376/743] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:37.483 [377/743] Generating lib/rte_power_def with a custom command 00:02:37.483 [378/743] Generating lib/rte_power_mingw with a custom command 00:02:37.483 [379/743] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 00:02:37.483 [380/743] Generating lib/rte_rawdev_def with a custom command 00:02:37.483 [381/743] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:37.483 [382/743] Generating lib/rte_rawdev_mingw with a custom command 00:02:37.483 [383/743] Generating lib/rte_regexdev_def with a custom command 00:02:37.483 [384/743] Generating lib/rte_regexdev_mingw with a custom command 00:02:37.743 [385/743] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:37.743 [386/743] Generating lib/rte_dmadev_def with a custom command 00:02:37.743 [387/743] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.743 [388/743] Generating lib/rte_dmadev_mingw with a custom command 00:02:37.743 [389/743] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:37.743 [390/743] Linking target lib/librte_lpm.so.23.0 00:02:37.743 [391/743] Linking static target lib/librte_pcapng.a 00:02:37.743 [392/743] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:37.743 [393/743] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:02:37.743 [394/743] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:02:37.743 [395/743] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:37.743 [396/743] Linking static target lib/librte_rawdev.a 00:02:37.743 [397/743] Generating lib/rte_rib_def with a custom command 00:02:37.743 [398/743] Generating lib/rte_rib_mingw with a custom command 00:02:38.002 [399/743] Generating lib/rte_reorder_def with a custom command 00:02:38.002 [400/743] Generating lib/rte_reorder_mingw with a custom command 00:02:38.002 [401/743] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.002 [402/743] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:38.002 [403/743] Linking static target lib/librte_dmadev.a 00:02:38.002 [404/743] Linking target lib/librte_pcapng.so.23.0 00:02:38.260 [405/743] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:02:38.260 [406/743] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:38.260 [407/743] Linking static target lib/librte_power.a 00:02:38.260 [408/743] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:38.260 [409/743] Linking static target lib/librte_regexdev.a 00:02:38.260 [410/743] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:38.260 [411/743] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.260 [412/743] Linking target lib/librte_rawdev.so.23.0 00:02:38.518 [413/743] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:38.518 [414/743] Linking static target lib/librte_member.a 00:02:38.518 [415/743] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:38.518 [416/743] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:38.518 [417/743] Generating lib/rte_sched_def with a custom command 00:02:38.518 [418/743] 
Generating lib/rte_sched_mingw with a custom command 00:02:38.518 [419/743] Generating lib/rte_security_def with a custom command 00:02:38.518 [420/743] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:38.518 [421/743] Generating lib/rte_security_mingw with a custom command 00:02:38.518 [422/743] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.518 [423/743] Linking target lib/librte_dmadev.so.23.0 00:02:38.780 [424/743] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:38.780 [425/743] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:38.780 [426/743] Generating lib/rte_stack_def with a custom command 00:02:38.780 [427/743] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:38.780 [428/743] Linking static target lib/librte_reorder.a 00:02:38.780 [429/743] Generating lib/rte_stack_mingw with a custom command 00:02:38.780 [430/743] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:38.780 [431/743] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:02:38.780 [432/743] Linking static target lib/librte_stack.a 00:02:38.780 [433/743] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.780 [434/743] Linking target lib/librte_member.so.23.0 00:02:39.039 [435/743] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:39.039 [436/743] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.039 [437/743] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.039 [438/743] Linking target lib/librte_stack.so.23.0 00:02:39.039 [439/743] Linking target lib/librte_reorder.so.23.0 00:02:39.039 [440/743] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:39.039 [441/743] Linking static target lib/librte_rib.a 00:02:39.039 [442/743] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.039 [443/743] Linking target lib/librte_regexdev.so.23.0 00:02:39.298 [444/743] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.298 [445/743] Linking target lib/librte_power.so.23.0 00:02:39.298 [446/743] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:39.298 [447/743] Linking static target lib/librte_security.a 00:02:39.557 [448/743] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.557 [449/743] Linking target lib/librte_rib.so.23.0 00:02:39.557 [450/743] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:39.557 [451/743] Generating lib/rte_vhost_def with a custom command 00:02:39.557 [452/743] Generating lib/rte_vhost_mingw with a custom command 00:02:39.557 [453/743] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:02:39.557 [454/743] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:39.815 [455/743] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:39.815 [456/743] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.815 [457/743] Linking target lib/librte_security.so.23.0 00:02:40.073 [458/743] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:40.073 [459/743] Linking static target lib/librte_sched.a 00:02:40.073 [460/743] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 00:02:40.331 
[461/743] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.331 [462/743] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:02:40.331 [463/743] Linking target lib/librte_sched.so.23.0 00:02:40.589 [464/743] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:40.589 [465/743] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:40.589 [466/743] Generating lib/rte_ipsec_def with a custom command 00:02:40.589 [467/743] Generating lib/rte_ipsec_mingw with a custom command 00:02:40.589 [468/743] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:02:40.589 [469/743] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:40.589 [470/743] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:40.847 [471/743] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:41.106 [472/743] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:41.106 [473/743] Generating lib/rte_fib_def with a custom command 00:02:41.106 [474/743] Generating lib/rte_fib_mingw with a custom command 00:02:41.106 [475/743] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:02:41.106 [476/743] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:02:41.106 [477/743] Linking static target lib/fib/libtrie_avx512_tmp.a 00:02:41.106 [478/743] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:02:41.365 [479/743] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:41.623 [480/743] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:41.623 [481/743] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:41.623 [482/743] Linking static target lib/librte_ipsec.a 00:02:41.881 [483/743] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.881 [484/743] Linking target lib/librte_ipsec.so.23.0 00:02:41.881 [485/743] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:42.139 [486/743] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:42.139 [487/743] Linking static target lib/librte_fib.a 00:02:42.139 [488/743] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:42.139 [489/743] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:42.139 [490/743] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:42.139 [491/743] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:42.398 [492/743] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.398 [493/743] Linking target lib/librte_fib.so.23.0 00:02:42.657 [494/743] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:43.224 [495/743] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:43.224 [496/743] Generating lib/rte_port_def with a custom command 00:02:43.224 [497/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:43.224 [498/743] Generating lib/rte_port_mingw with a custom command 00:02:43.224 [499/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:43.224 [500/743] Generating lib/rte_pdump_def with a custom command 00:02:43.224 [501/743] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:43.224 [502/743] Generating lib/rte_pdump_mingw with a custom command 00:02:43.224 [503/743] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:43.507 [504/743] Compiling C object 
lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:43.507 [505/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:43.507 [506/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:43.507 [507/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:02:43.773 [508/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:43.773 [509/743] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:43.773 [510/743] Linking static target lib/librte_port.a 00:02:44.031 [511/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:02:44.031 [512/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:44.290 [513/743] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:02:44.290 [514/743] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.290 [515/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:44.290 [516/743] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:44.290 [517/743] Linking target lib/librte_port.so.23.0 00:02:44.553 [518/743] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:44.553 [519/743] Linking static target lib/librte_pdump.a 00:02:44.553 [520/743] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:02:44.812 [521/743] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.812 [522/743] Linking target lib/librte_pdump.so.23.0 00:02:45.070 [523/743] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:02:45.070 [524/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:45.071 [525/743] Generating lib/rte_table_def with a custom command 00:02:45.071 [526/743] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:45.071 [527/743] Generating lib/rte_table_mingw with a custom command 00:02:45.071 [528/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:45.071 [529/743] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:02:45.329 [530/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:45.329 [531/743] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:02:45.329 [532/743] Generating lib/rte_pipeline_def with a custom command 00:02:45.329 [533/743] Generating lib/rte_pipeline_mingw with a custom command 00:02:45.588 [534/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:45.588 [535/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:45.588 [536/743] Linking static target lib/librte_table.a 00:02:45.588 [537/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:02:46.155 [538/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:46.155 [539/743] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:02:46.155 [540/743] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.413 [541/743] Linking target lib/librte_table.so.23.0 00:02:46.413 [542/743] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:02:46.413 [543/743] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:02:46.413 [544/743] Generating lib/rte_graph_def with a custom command 00:02:46.413 [545/743] Generating lib/rte_graph_mingw with a custom 
command 00:02:46.413 [546/743] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 00:02:46.672 [547/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:46.672 [548/743] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:02:46.930 [549/743] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:02:46.930 [550/743] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:02:46.930 [551/743] Linking static target lib/librte_graph.a 00:02:47.188 [552/743] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:02:47.188 [553/743] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:02:47.188 [554/743] Compiling C object lib/librte_node.a.p/node_null.c.o 00:02:47.447 [555/743] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:02:47.447 [556/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:47.447 [557/743] Generating lib/rte_node_def with a custom command 00:02:47.705 [558/743] Generating lib/rte_node_mingw with a custom command 00:02:47.705 [559/743] Compiling C object lib/librte_node.a.p/node_log.c.o 00:02:47.705 [560/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:47.705 [561/743] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.964 [562/743] Linking target lib/librte_graph.so.23.0 00:02:47.964 [563/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:47.964 [564/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:47.964 [565/743] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:02:47.964 [566/743] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:47.964 [567/743] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:02:47.964 [568/743] Generating drivers/rte_bus_pci_def with a custom command 00:02:47.964 [569/743] Generating drivers/rte_bus_pci_mingw with a custom command 00:02:47.964 [570/743] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:47.964 [571/743] Generating drivers/rte_bus_vdev_def with a custom command 00:02:48.223 [572/743] Generating drivers/rte_bus_vdev_mingw with a custom command 00:02:48.223 [573/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:48.224 [574/743] Generating drivers/rte_mempool_ring_def with a custom command 00:02:48.224 [575/743] Generating drivers/rte_mempool_ring_mingw with a custom command 00:02:48.224 [576/743] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:48.224 [577/743] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:48.224 [578/743] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:02:48.224 [579/743] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:48.224 [580/743] Linking static target lib/librte_node.a 00:02:48.224 [581/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:48.483 [582/743] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:48.483 [583/743] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:48.483 [584/743] Linking static target drivers/librte_bus_vdev.a 00:02:48.483 [585/743] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.483 [586/743] Linking target lib/librte_node.so.23.0 00:02:48.483 [587/743] Compiling 
C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:48.742 [588/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:48.742 [589/743] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:48.742 [590/743] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.742 [591/743] Linking target drivers/librte_bus_vdev.so.23.0 00:02:49.001 [592/743] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:49.001 [593/743] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 00:02:49.001 [594/743] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:49.001 [595/743] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:49.001 [596/743] Linking static target drivers/librte_bus_pci.a 00:02:49.259 [597/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:02:49.259 [598/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:02:49.259 [599/743] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.259 [600/743] Linking target drivers/librte_bus_pci.so.23.0 00:02:49.259 [601/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:02:49.518 [602/743] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:02:49.518 [603/743] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:49.518 [604/743] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:49.518 [605/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:02:49.518 [606/743] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:49.775 [607/743] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:49.775 [608/743] Linking static target drivers/librte_mempool_ring.a 00:02:49.775 [609/743] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:49.775 [610/743] Linking target drivers/librte_mempool_ring.so.23.0 00:02:50.342 [611/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:02:50.342 [612/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:02:50.600 [613/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:02:50.600 [614/743] Linking static target drivers/net/i40e/base/libi40e_base.a 00:02:50.858 [615/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:02:51.116 [616/743] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:02:51.116 [617/743] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:02:51.682 [618/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:02:51.682 [619/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:02:51.940 [620/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:02:51.940 [621/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:02:51.940 [622/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:02:51.940 [623/743] Generating drivers/rte_net_i40e_def with a custom command 00:02:51.940 [624/743] Generating 
drivers/rte_net_i40e_mingw with a custom command 00:02:52.197 [625/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:02:53.132 [626/743] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:02:53.390 [627/743] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:02:53.390 [628/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:02:53.390 [629/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:02:53.390 [630/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:02:53.648 [631/743] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:02:53.649 [632/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:02:53.649 [633/743] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:02:53.649 [634/743] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:02:53.907 [635/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_avx2.c.o 00:02:53.907 [636/743] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:02:54.474 [637/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:02:54.474 [638/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:02:54.474 [639/743] Linking static target drivers/libtmp_rte_net_i40e.a 00:02:54.732 [640/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:02:54.732 [641/743] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:54.732 [642/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:02:54.732 [643/743] Linking static target lib/librte_vhost.a 00:02:54.732 [644/743] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:02:54.732 [645/743] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:54.990 [646/743] Linking static target drivers/librte_net_i40e.a 00:02:54.990 [647/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:02:54.990 [648/743] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:54.990 [649/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:02:55.248 [650/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:02:55.506 [651/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:02:55.506 [652/743] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.506 [653/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:02:55.765 [654/743] Linking target drivers/librte_net_i40e.so.23.0 00:02:55.765 [655/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:02:55.765 [656/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:02:56.023 [657/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:02:56.023 [658/743] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.023 [659/743] Linking target lib/librte_vhost.so.23.0 00:02:56.590 [660/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:02:56.590 [661/743] 
Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:02:56.590 [662/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:02:56.590 [663/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:02:56.590 [664/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:02:56.590 [665/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:02:56.590 [666/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:02:56.847 [667/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:02:56.847 [668/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:02:56.847 [669/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:02:57.103 [670/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:02:57.359 [671/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:02:57.616 [672/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:02:57.616 [673/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:02:58.179 [674/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:02:58.179 [675/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:02:58.179 [676/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:02:58.436 [677/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:02:58.436 [678/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:02:58.693 [679/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:02:58.693 [680/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:02:58.951 [681/743] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:02:58.951 [682/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:02:59.209 [683/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:02:59.209 [684/743] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:02:59.209 [685/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:02:59.467 [686/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:02:59.467 [687/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:02:59.467 [688/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:02:59.725 [689/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:02:59.725 [690/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:02:59.725 [691/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:02:59.725 [692/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:02:59.725 [693/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:02:59.983 [694/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:03:00.548 [695/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:03:00.548 [696/743] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:03:00.548 [697/743] 
Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:03:00.806 [698/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:03:00.806 [699/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:03:01.372 [700/743] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:03:01.372 [701/743] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:03:01.372 [702/743] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:03:01.630 [703/743] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:03:01.630 [704/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:01.630 [705/743] Linking static target lib/librte_pipeline.a 00:03:01.888 [706/743] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:03:01.888 [707/743] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:03:02.146 [708/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:03:02.146 [709/743] Linking target app/dpdk-dumpcap 00:03:02.405 [710/743] Linking target app/dpdk-pdump 00:03:02.405 [711/743] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:03:02.405 [712/743] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:03:02.663 [713/743] Linking target app/dpdk-proc-info 00:03:02.663 [714/743] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:03:02.663 [715/743] Linking target app/dpdk-test-bbdev 00:03:02.663 [716/743] Linking target app/dpdk-test-acl 00:03:02.941 [717/743] Linking target app/dpdk-test-cmdline 00:03:02.941 [718/743] Linking target app/dpdk-test-compress-perf 00:03:03.224 [719/743] Linking target app/dpdk-test-crypto-perf 00:03:03.224 [720/743] Linking target app/dpdk-test-eventdev 00:03:03.224 [721/743] Linking target app/dpdk-test-fib 00:03:03.224 [722/743] Linking target app/dpdk-test-flow-perf 00:03:03.224 [723/743] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:03:03.224 [724/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:03:03.224 [725/743] Linking target app/dpdk-test-gpudev 00:03:03.483 [726/743] Linking target app/dpdk-test-pipeline 00:03:04.048 [727/743] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:03:04.048 [728/743] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:03:04.048 [729/743] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:03:04.306 [730/743] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:03:04.306 [731/743] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:03:04.306 [732/743] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:03:04.306 [733/743] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.564 [734/743] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:03:04.564 [735/743] Linking target lib/librte_pipeline.so.23.0 00:03:04.564 [736/743] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:03:04.822 [737/743] Linking target app/dpdk-test-sad 00:03:04.822 [738/743] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:03:04.822 [739/743] Linking target app/dpdk-test-regex 00:03:05.390 [740/743] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:03:05.390 [741/743] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:03:05.648 [742/743] Linking target app/dpdk-test-security-perf 00:03:05.648 [743/743] Linking target 
app/dpdk-testpmd 00:03:05.648 18:09:52 build_native_dpdk -- common/autobuild_common.sh@188 -- $ uname -s 00:03:05.648 18:09:52 build_native_dpdk -- common/autobuild_common.sh@188 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:05.648 18:09:52 build_native_dpdk -- common/autobuild_common.sh@201 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:03:05.906 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:03:05.906 [0/1] Installing files. 00:03:06.168 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:03:06.168 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:06.168 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:06.168 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:06.168 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:06.168 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:06.168 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:06.168 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:06.168 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:06.168 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:06.168 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:06.168 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:06.168 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:06.168 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:06.168 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:06.168 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:06.168 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:06.168 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:03:06.168 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:03:06.168 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:03:06.168 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:03:06.168 Installing 
/home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:06.168 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:06.168 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:06.168 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:06.168 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:03:06.168 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:06.168 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:06.168 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:06.168 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:06.169 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:06.169 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:06.169 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:06.169 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:06.169 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:06.169 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:06.169 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:06.169 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:06.169 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:06.169 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:06.169 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:06.169 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:06.169 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 
00:03:06.169 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:06.169 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:06.169 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:06.169 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:06.169 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:06.169 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:06.169 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:06.169 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:06.169 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:06.169 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:06.169 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:06.169 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:06.169 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/flow_classify.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:06.169 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/ipv4_rules_file.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:06.169 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:06.169 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:06.169 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:06.169 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:06.169 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:06.169 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:06.169 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:06.169 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.169 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.169 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.169 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.169 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.169 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.169 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.169 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.169 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.169 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.169 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.169 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.169 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.169 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.169 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.169 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.169 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.169 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.169 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.169 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.169 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.169 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.169 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.169 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.169 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.169 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.169 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.169 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.169 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.169 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:06.169 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:06.169 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:06.169 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/kni.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:06.169 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:06.169 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:06.169 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:06.169 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:06.169 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:06.169 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:06.169 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:06.169 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.169 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.169 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.169 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.169 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.169 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.169 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.169 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.169 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.169 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.169 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.169 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.170 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.170 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.170 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.170 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.170 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.170 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.170 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.170 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.170 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.170 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.170 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.170 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.170 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.170 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.170 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.170 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.170 
Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.170 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.170 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.170 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.170 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.170 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.170 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.170 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.170 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.170 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.170 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.170 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.170 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.170 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.170 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.170 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.170 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.170 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.170 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.170 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.170 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.170 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.170 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.170 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.170 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.170 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.170 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:06.170 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:06.170 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:06.170 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:06.170 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:06.170 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:06.170 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:06.170 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:06.170 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:06.170 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:06.170 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:06.170 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:06.170 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:06.170 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:06.170 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:06.170 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 
00:03:06.170 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:06.170 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:06.170 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:06.170 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:06.170 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:06.170 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:06.170 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:06.170 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:06.170 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:06.170 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:06.170 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:06.170 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:06.170 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:06.170 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:06.170 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:06.170 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:06.170 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:06.170 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:06.170 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:06.170 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.170 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.170 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.170 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.170 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.170 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.170 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.170 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.170 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.170 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.170 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.170 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.170 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.170 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.171 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.171 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.171 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.171 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.171 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.171 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.171 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.171 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.171 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.171 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.171 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.171 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.171 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.171 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.171 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.171 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.171 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.171 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.171 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.171 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:06.171 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:06.171 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:03:06.171 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:06.171 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:06.171 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:06.171 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:06.171 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:06.171 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:06.171 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:06.171 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:06.171 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:06.171 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:06.171 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 
00:03:06.171 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:06.171 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:06.171 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:06.171 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:06.171 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:06.171 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:06.171 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:06.171 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:06.171 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:06.171 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:06.171 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:06.171 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:06.171 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:06.171 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:06.171 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:06.171 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:06.171 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:06.171 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:06.171 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:06.171 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:06.171 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:06.171 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:06.171 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:06.171 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.171 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.171 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.171 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.171 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.171 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.171 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.171 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.171 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.171 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.171 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.171 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.171 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.171 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.171 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.171 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.171 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.171 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.171 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.171 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.171 
Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.171 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.171 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.171 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.171 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.171 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.171 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.171 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.171 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.171 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.171 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.171 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.172 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.172 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.172 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.172 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.172 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:06.172 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:06.172 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:06.172 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:06.172 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:06.172 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:06.172 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:06.172 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:06.172 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:06.172 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:06.172 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:06.172 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:06.172 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:06.172 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:06.172 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:06.172 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:06.172 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:06.172 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:06.172 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:06.172 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:06.172 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:06.172 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:06.172 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:06.172 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:03:06.172 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:03:06.172 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:03:06.172 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:06.172 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:06.172 Installing 
/home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:06.172 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:06.172 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:06.172 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:06.172 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:06.172 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:06.172 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:06.172 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:06.172 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:06.172 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:06.172 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:06.172 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:06.172 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:06.172 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:06.172 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:06.172 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:06.172 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:06.172 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:06.172 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:06.172 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:06.172 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:06.172 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:06.172 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:06.172 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:06.172 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:06.172 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:06.172 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:06.172 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:06.172 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:06.172 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:06.172 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:06.172 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:06.172 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:06.172 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:06.172 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:06.172 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:06.172 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:06.172 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:06.172 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:06.172 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:06.172 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:06.172 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:06.172 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:06.172 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:06.172 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:06.172 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:06.172 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:06.172 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:06.172 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:06.173 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:06.173 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:06.173 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:06.173 Installing lib/librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:06.173 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:06.173 Installing lib/librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:06.173 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:06.173 Installing lib/librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:06.173 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:06.173 Installing lib/librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:06.173 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:06.173 Installing lib/librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:06.173 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:06.173 Installing lib/librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:06.173 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:06.173 Installing lib/librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:06.173 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:06.433 Installing lib/librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:06.433 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:06.433 Installing lib/librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:06.433 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:06.433 Installing lib/librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:06.433 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:06.433 Installing lib/librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:06.433 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:06.433 Installing lib/librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:06.433 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:06.433 Installing lib/librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:06.433 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:06.433 Installing lib/librte_hash.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:03:06.433 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:06.433 Installing lib/librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:06.433 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:06.433 Installing lib/librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:06.433 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:06.433 Installing lib/librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:06.433 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:06.433 Installing lib/librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:06.433 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:06.433 Installing lib/librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:06.433 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:06.433 Installing lib/librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:06.433 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:06.433 Installing lib/librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:06.433 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:06.433 Installing lib/librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:06.433 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:06.433 Installing lib/librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:06.433 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:06.433 Installing lib/librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:06.433 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:06.433 Installing lib/librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:06.433 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:06.433 Installing lib/librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:06.433 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:06.433 Installing lib/librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:06.433 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:06.433 Installing lib/librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:06.433 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:06.433 Installing lib/librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:06.433 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:06.433 Installing lib/librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:06.433 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:06.433 Installing lib/librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:06.433 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:06.433 Installing lib/librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:06.433 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:06.433 Installing lib/librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:06.433 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:06.433 Installing 
lib/librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:06.433 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:06.433 Installing lib/librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:06.433 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:06.433 Installing lib/librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:06.433 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:06.433 Installing lib/librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:06.433 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:06.433 Installing lib/librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:06.433 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:06.433 Installing lib/librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:06.433 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:06.433 Installing lib/librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:06.433 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:06.433 Installing lib/librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:06.433 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:06.433 Installing lib/librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:06.433 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:06.433 Installing lib/librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:06.433 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:06.433 Installing lib/librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:06.433 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:06.433 Installing lib/librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:06.433 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:06.433 Installing lib/librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:06.433 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:06.433 Installing lib/librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:06.433 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:06.433 Installing lib/librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:06.433 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:06.433 Installing lib/librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:06.433 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:06.433 Installing lib/librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:06.433 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:06.433 Installing lib/librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:06.433 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:06.433 Installing lib/librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:06.433 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:06.433 Installing drivers/librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:06.433 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 
00:03:06.433 Installing drivers/librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:06.433 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:06.433 Installing drivers/librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:06.433 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:06.433 Installing drivers/librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:06.433 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:06.433 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:06.433 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:06.433 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:06.433 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:06.433 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:06.433 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:06.433 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:06.433 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:06.433 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:06.433 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:06.433 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:06.433 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:06.433 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:06.433 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:06.433 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:06.434 Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:06.434 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.434 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.434 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.434 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:06.434 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:06.434 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:06.434 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:06.434 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:06.434 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:06.434 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:06.434 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 
00:03:06.434 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:06.434 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:06.434 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:06.434 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:06.434 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.434 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.434 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.434 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.434 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.434 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.434 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.434 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.434 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.434 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.434 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.434 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.434 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.434 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.434 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.434 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.434 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.434 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.434 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.434 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.434 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.434 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.434 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.434 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.434 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.434 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.434 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.434 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.434 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.434 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.434 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.434 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.434 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.434 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.434 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.434 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.434 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.434 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.434 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.434 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.434 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.434 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.434 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.434 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.434 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.434 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.434 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.434 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.434 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.434 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.434 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.434 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.434 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.434 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.434 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.434 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.434 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.434 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.434 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.434 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.434 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.434 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.434 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.434 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.434 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.434 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.434 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.434 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.434 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.434 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.434 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.434 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.434 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.434 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.434 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.434 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.434 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.434 Installing 
/home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.434 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.434 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.434 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.434 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.434 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.434 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.434 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.434 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.434 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.434 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.434 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.434 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.434 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.434 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.434 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.434 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.435 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.435 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.435 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.435 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.435 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.435 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.435 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.435 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.435 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.435 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.435 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.435 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.435 Installing 
/home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.435 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.435 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.435 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.435 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.435 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.435 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.435 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.435 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.435 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.435 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.435 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.435 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.435 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.435 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.435 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.435 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.435 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.435 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.435 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.435 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.435 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.435 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.435 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.435 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.435 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.435 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.435 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.435 Installing 
/home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.435 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.435 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.435 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.435 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.435 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.435 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.435 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.435 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.435 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.435 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.435 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.435 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.435 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.435 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.435 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.435 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.435 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.435 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.435 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.435 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.435 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.435 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.435 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.435 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.435 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.435 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.435 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.435 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:06.435 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.435 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.435 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.435 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.435 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.435 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.435 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.435 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.435 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.435 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.435 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.435 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.435 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.435 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.435 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.435 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.435 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.435 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.435 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.435 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.435 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.435 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.435 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.435 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.435 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.435 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.435 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.435 Installing 
/home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.435 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.435 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.695 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.695 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_empty_poll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.695 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_intel_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.695 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.695 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.695 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.695 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.695 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.695 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.695 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.695 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.695 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.695 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.695 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.695 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.695 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.695 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.695 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.695 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.695 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.695 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.695 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.695 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.695 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.695 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.695 Installing 
/home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.695 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.695 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.695 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.695 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.695 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.696 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.696 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.696 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.696 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.696 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.696 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.696 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.696 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.696 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.696 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.696 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.696 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.696 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.696 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.696 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.696 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.696 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.696 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.696 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.696 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.696 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.696 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.696 Installing 
/home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.696 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.696 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.696 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.696 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.696 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.696 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.696 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.696 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.696 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.696 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.696 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.696 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.696 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.696 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.696 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.696 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.696 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.696 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.696 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.696 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.696 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.696 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.696 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.696 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.696 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.696 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.696 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 
00:03:06.696 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.696 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.696 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.696 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.696 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.696 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:06.696 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:06.696 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:06.696 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:06.696 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:06.696 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:06.696 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:06.696 Installing symlink pointing to librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.23 00:03:06.696 Installing symlink pointing to librte_kvargs.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:03:06.696 Installing symlink pointing to librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.23 00:03:06.696 Installing symlink pointing to librte_telemetry.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:03:06.696 Installing symlink pointing to librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.23 00:03:06.696 Installing symlink pointing to librte_eal.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:03:06.696 Installing symlink pointing to librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.23 00:03:06.696 Installing symlink pointing to librte_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:03:06.696 Installing symlink pointing to librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.23 00:03:06.696 Installing symlink pointing to librte_rcu.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:03:06.696 Installing symlink pointing to librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.23 00:03:06.696 Installing symlink pointing to librte_mempool.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:03:06.696 Installing symlink pointing to librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.23 00:03:06.696 Installing symlink pointing to librte_mbuf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:03:06.696 Installing symlink pointing to librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.23 00:03:06.696 Installing symlink pointing to librte_net.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:03:06.696 Installing symlink 
pointing to librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.23 00:03:06.696 Installing symlink pointing to librte_meter.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:03:06.696 Installing symlink pointing to librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.23 00:03:06.696 Installing symlink pointing to librte_ethdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:03:06.696 Installing symlink pointing to librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.23 00:03:06.696 Installing symlink pointing to librte_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:03:06.696 Installing symlink pointing to librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.23 00:03:06.696 Installing symlink pointing to librte_cmdline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:03:06.696 Installing symlink pointing to librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.23 00:03:06.696 Installing symlink pointing to librte_metrics.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:03:06.696 Installing symlink pointing to librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.23 00:03:06.696 Installing symlink pointing to librte_hash.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:03:06.696 Installing symlink pointing to librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.23 00:03:06.696 Installing symlink pointing to librte_timer.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:03:06.696 Installing symlink pointing to librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.23 00:03:06.696 Installing symlink pointing to librte_acl.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:03:06.696 Installing symlink pointing to librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.23 00:03:06.696 Installing symlink pointing to librte_bbdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:03:06.696 Installing symlink pointing to librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.23 00:03:06.696 Installing symlink pointing to librte_bitratestats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:03:06.696 Installing symlink pointing to librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.23 00:03:06.696 Installing symlink pointing to librte_bpf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:03:06.696 Installing symlink pointing to librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.23 00:03:06.696 Installing symlink pointing to librte_cfgfile.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:03:06.696 Installing symlink pointing to librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.23 00:03:06.696 Installing symlink pointing to librte_compressdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:03:06.696 Installing symlink pointing to librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.23 00:03:06.696 Installing symlink pointing to librte_cryptodev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:03:06.696 Installing symlink pointing to 
librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.23 00:03:06.696 Installing symlink pointing to librte_distributor.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:03:06.696 Installing symlink pointing to librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.23 00:03:06.696 Installing symlink pointing to librte_efd.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:03:06.696 Installing symlink pointing to librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.23 00:03:06.697 Installing symlink pointing to librte_eventdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:03:06.697 Installing symlink pointing to librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.23 00:03:06.697 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:03:06.697 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:03:06.697 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:03:06.697 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:03:06.697 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:03:06.697 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:03:06.697 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:03:06.697 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:03:06.697 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:03:06.697 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:03:06.697 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:03:06.697 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:03:06.697 Installing symlink pointing to librte_gpudev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:03:06.697 Installing symlink pointing to librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.23 00:03:06.697 Installing symlink pointing to librte_gro.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:03:06.697 Installing symlink pointing to librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.23 00:03:06.697 Installing symlink pointing to librte_gso.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:03:06.697 Installing symlink pointing to librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.23 00:03:06.697 Installing symlink pointing to librte_ip_frag.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:03:06.697 Installing symlink pointing to librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.23 00:03:06.697 Installing symlink pointing to librte_jobstats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:03:06.697 Installing symlink pointing to librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.23 00:03:06.697 Installing symlink pointing to librte_latencystats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:03:06.697 Installing symlink pointing to librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.23 00:03:06.697 Installing symlink pointing to librte_lpm.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:03:06.697 Installing symlink pointing to librte_member.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.23 00:03:06.697 Installing symlink pointing to librte_member.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:03:06.697 Installing symlink pointing to librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.23 00:03:06.697 Installing symlink pointing to librte_pcapng.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:03:06.697 Installing symlink pointing to librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.23 00:03:06.697 Installing symlink pointing to librte_power.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:03:06.697 Installing symlink pointing to librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.23 00:03:06.697 Installing symlink pointing to librte_rawdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:03:06.697 Installing symlink pointing to librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.23 00:03:06.697 Installing symlink pointing to librte_regexdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:03:06.697 Installing symlink pointing to librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.23 00:03:06.697 Installing symlink pointing to librte_dmadev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:03:06.697 Installing symlink pointing to librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.23 00:03:06.697 Installing symlink pointing to librte_rib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:03:06.697 Installing symlink pointing to librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.23 00:03:06.697 Installing symlink pointing to librte_reorder.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:03:06.697 Installing symlink pointing to librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.23 00:03:06.697 Installing symlink pointing to librte_sched.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:03:06.697 Installing symlink pointing to librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.23 00:03:06.697 Installing symlink pointing to librte_security.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:03:06.697 Installing symlink pointing to librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.23 00:03:06.697 Installing symlink pointing to librte_stack.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:03:06.697 Installing symlink pointing to librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.23 00:03:06.697 Installing symlink pointing to librte_vhost.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:03:06.697 Installing symlink pointing to librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.23 00:03:06.697 Installing symlink pointing to librte_ipsec.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:03:06.697 Installing symlink pointing to librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.23 00:03:06.697 Installing symlink pointing to librte_fib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:03:06.697 Installing symlink pointing to librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.23 
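The long run of "Installing symlink" entries here (it continues below) is meson laying down the conventional SONAME chain for each DPDK library. As a minimal sketch of what one pair of entries amounts to, using the librte_port paths from this log; the explicit ln -sf form is an illustrative reconstruction, not the literal installer code:

    # Illustrative reconstruction of one symlink pair from the log above,
    # not the actual meson install step. The real file carries the full
    # version; the .so.23 name is what the dynamic linker resolves via
    # SONAME, and the bare .so is the name the link editor uses for -l.
    cd /home/vagrant/spdk_repo/dpdk/build/lib
    ln -sf librte_port.so.23.0 librte_port.so.23   # runtime (SONAME) name
    ln -sf librte_port.so.23 librte_port.so        # development (link-time) name

The same pattern repeats for the PMDs, except that their symlinks land in the dpdk/pmds-23.0 plugin subdirectory shown in the entries that follow.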
00:03:06.697 Installing symlink pointing to librte_port.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so
00:03:06.697 Installing symlink pointing to librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.23
00:03:06.697 Installing symlink pointing to librte_pdump.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so
00:03:06.697 Installing symlink pointing to librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.23
00:03:06.697 Installing symlink pointing to librte_table.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so
00:03:06.697 Installing symlink pointing to librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.23
00:03:06.697 Installing symlink pointing to librte_pipeline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so
00:03:06.697 Installing symlink pointing to librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.23
00:03:06.697 Installing symlink pointing to librte_graph.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so
00:03:06.697 Installing symlink pointing to librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.23
00:03:06.697 Installing symlink pointing to librte_node.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so
00:03:06.697 Installing symlink pointing to librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23
00:03:06.697 Installing symlink pointing to librte_bus_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so
00:03:06.697 Installing symlink pointing to librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23
00:03:06.697 Installing symlink pointing to librte_bus_vdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so
00:03:06.697 Installing symlink pointing to librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23
00:03:06.697 Installing symlink pointing to librte_mempool_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so
00:03:06.697 Installing symlink pointing to librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23
00:03:06.697 Installing symlink pointing to librte_net_i40e.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so
00:03:06.697 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0'
00:03:06.697 18:09:52 build_native_dpdk -- common/autobuild_common.sh@207 -- $ cat
00:03:06.697 18:09:52 build_native_dpdk -- common/autobuild_common.sh@212 -- $ cd /home/vagrant/spdk_repo/spdk
00:03:06.697 
00:03:06.697 real 0m51.232s
00:03:06.697 user 6m8.912s
00:03:06.697 sys 0m54.942s
00:03:06.697 18:09:52 build_native_dpdk -- common/autotest_common.sh@1124 -- $ xtrace_disable
00:03:06.697 18:09:52 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x
00:03:06.697 ************************************
00:03:06.697 END TEST build_native_dpdk
00:03:06.697 ************************************
00:03:06.697 18:09:52 -- common/autotest_common.sh@1142 -- $ return 0
00:03:06.697 18:09:52 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:03:06.697 18:09:52 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:03:06.697 18:09:52 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:03:06.697 18:09:52 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:03:06.697 18:09:52 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:03:06.697 18:09:52 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:03:06.697 18:09:52 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:03:06.697 18:09:52 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme --with-shared
00:03:06.697 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs...
00:03:06.955 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib
00:03:06.955 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include
00:03:06.955 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:03:07.213 Using 'verbs' RDMA provider
00:03:20.791 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:03:35.670 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:03:35.670 Creating mk/config.mk...done.
00:03:35.670 Creating mk/cc.flags.mk...done.
00:03:35.670 Type 'make' to build.
00:03:35.670 18:10:20 -- spdk/autobuild.sh@69 -- $ run_test make make -j10
00:03:35.670 18:10:20 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']'
00:03:35.670 18:10:20 -- common/autotest_common.sh@1105 -- $ xtrace_disable
00:03:35.670 18:10:20 -- common/autotest_common.sh@10 -- $ set +x
00:03:35.670 ************************************
00:03:35.670 START TEST make
00:03:35.670 ************************************
00:03:35.670 18:10:20 make -- common/autotest_common.sh@1123 -- $ make -j10
00:03:35.670 (cd /home/vagrant/spdk_repo/spdk/xnvme && \
00:03:35.670 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \
00:03:35.670 meson setup builddir \
00:03:35.670 -Dwith-libaio=enabled \
00:03:35.670 -Dwith-liburing=enabled \
00:03:35.670 -Dwith-libvfn=disabled \
00:03:35.670 -Dwith-spdk=false && \
00:03:35.670 meson compile -C builddir && \
00:03:35.670 cd -)
00:03:35.670 make[1]: Nothing to be done for 'all'.
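At this point the hand-off from DPDK to SPDK is complete, so it can be condensed into a re-runnable sketch. Paths and flags are taken from the log above (only a subset of the configure flags is repeated); that a local checkout matches this CI layout is an assumption:

    # Check the staged DPDK that configure reports as
    # "Using .../pkgconfig for additional libs...":
    PKG_CONFIG_PATH=/home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig \
        pkg-config --cflags --libs libdpdk

    # Point SPDK at that staging tree and build, as the log does:
    cd /home/vagrant/spdk_repo/spdk
    ./configure --with-dpdk=/home/vagrant/spdk_repo/dpdk/build \
                --with-xnvme --with-shared
    make -j10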
00:03:37.044 The Meson build system
00:03:37.044 Version: 1.3.1
00:03:37.044 Source dir: /home/vagrant/spdk_repo/spdk/xnvme
00:03:37.044 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:03:37.044 Build type: native build
00:03:37.044 Project name: xnvme
00:03:37.044 Project version: 0.7.3
00:03:37.044 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:03:37.044 C linker for the host machine: gcc ld.bfd 2.39-16
00:03:37.044 Host machine cpu family: x86_64
00:03:37.044 Host machine cpu: x86_64
00:03:37.044 Message: host_machine.system: linux
00:03:37.044 Compiler for C supports arguments -Wno-missing-braces: YES
00:03:37.044 Compiler for C supports arguments -Wno-cast-function-type: YES
00:03:37.044 Compiler for C supports arguments -Wno-strict-aliasing: YES
00:03:37.044 Run-time dependency threads found: YES
00:03:37.044 Has header "setupapi.h" : NO
00:03:37.044 Has header "linux/blkzoned.h" : YES
00:03:37.044 Has header "linux/blkzoned.h" : YES (cached)
00:03:37.044 Has header "libaio.h" : YES
00:03:37.044 Library aio found: YES
00:03:37.044 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:03:37.044 Run-time dependency liburing found: YES 2.2
00:03:37.044 Dependency libvfn skipped: feature with-libvfn disabled
00:03:37.044 Run-time dependency appleframeworks found: NO (tried framework)
00:03:37.044 Run-time dependency appleframeworks found: NO (tried framework)
00:03:37.044 Configuring xnvme_config.h using configuration
00:03:37.044 Configuring xnvme.spec using configuration
00:03:37.044 Run-time dependency bash-completion found: YES 2.11
00:03:37.044 Message: Bash-completions: /usr/share/bash-completion/completions
00:03:37.044 Program cp found: YES (/usr/bin/cp)
00:03:37.044 Has header "winsock2.h" : NO
00:03:37.044 Has header "dbghelp.h" : NO
00:03:37.044 Library rpcrt4 found: NO
00:03:37.044 Library rt found: YES
00:03:37.044 Checking for function "clock_gettime" with dependency -lrt: YES
00:03:37.044 Found CMake: /usr/bin/cmake (3.27.7)
00:03:37.044 Run-time dependency _spdk found: NO (tried pkgconfig and cmake)
00:03:37.044 Run-time dependency wpdk found: NO (tried pkgconfig and cmake)
00:03:37.044 Run-time dependency spdk-win found: NO (tried pkgconfig and cmake)
00:03:37.044 Build targets in project: 32
00:03:37.044 
00:03:37.044 xnvme 0.7.3
00:03:37.044 
00:03:37.045 User defined options
00:03:37.045 with-libaio : enabled
00:03:37.045 with-liburing: enabled
00:03:37.045 with-libvfn : disabled
00:03:37.045 with-spdk : false
00:03:37.045 
00:03:37.045 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:03:37.302 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir'
00:03:37.302 [1/203] Generating toolbox/xnvme-driver-script with a custom command
00:03:37.302 [2/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_mem_posix.c.o
00:03:37.302 [3/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd.c.o
00:03:37.302 [4/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_async.c.o
00:03:37.302 [5/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_nil.c.o
00:03:37.302 [6/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_emu.c.o
00:03:37.302 [7/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_dev.c.o
00:03:37.302 [8/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_admin_shim.c.o
00:03:37.302 [9/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_posix.c.o
00:03:37.561 [10/203] Compiling C object lib/libxnvme.so.p/xnvme_adm.c.o
00:03:37.561 
[11/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_nvme.c.o 00:03:37.561 [12/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_sync_psync.c.o 00:03:37.561 [13/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux.c.o 00:03:37.561 [14/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_thrpool.c.o 00:03:37.561 [15/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos.c.o 00:03:37.561 [16/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_admin.c.o 00:03:37.561 [17/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_hugepage.c.o 00:03:37.561 [18/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_dev.c.o 00:03:37.561 [19/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_libaio.c.o 00:03:37.561 [20/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_sync.c.o 00:03:37.561 [21/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_dev.c.o 00:03:37.561 [22/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk.c.o 00:03:37.561 [23/203] Compiling C object lib/libxnvme.so.p/xnvme_be.c.o 00:03:37.561 [24/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk.c.o 00:03:37.561 [25/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_ucmd.c.o 00:03:37.819 [26/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_admin.c.o 00:03:37.819 [27/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_admin.c.o 00:03:37.819 [28/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_nvme.c.o 00:03:37.819 [29/203] Compiling C object lib/libxnvme.so.p/xnvme_be_nosys.c.o 00:03:37.819 [30/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_dev.c.o 00:03:37.819 [31/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_sync.c.o 00:03:37.819 [32/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_async.c.o 00:03:37.819 [33/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_dev.c.o 00:03:37.819 [34/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_block.c.o 00:03:37.819 [35/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_mem.c.o 00:03:37.819 [36/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_sync.c.o 00:03:37.819 [37/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_mem.c.o 00:03:37.819 [38/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio.c.o 00:03:37.819 [39/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_liburing.c.o 00:03:37.820 [40/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_async.c.o 00:03:37.820 [41/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_admin.c.o 00:03:37.820 [42/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_dev.c.o 00:03:37.820 [43/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_sync.c.o 00:03:37.820 [44/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows.c.o 00:03:37.820 [45/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp_th.c.o 00:03:37.820 [46/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_mem.c.o 00:03:37.820 [47/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_ioring.c.o 00:03:37.820 [48/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_dev.c.o 00:03:37.820 [49/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_fs.c.o 00:03:37.820 [50/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp.c.o 00:03:37.820 [51/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_block.c.o 00:03:37.820 [52/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_nvme.c.o 00:03:37.820 [53/203] Compiling C object 
lib/libxnvme.so.p/xnvme_libconf_entries.c.o 00:03:38.078 [54/203] Compiling C object lib/libxnvme.so.p/xnvme_ident.c.o 00:03:38.078 [55/203] Compiling C object lib/libxnvme.so.p/xnvme_file.c.o 00:03:38.078 [56/203] Compiling C object lib/libxnvme.so.p/xnvme_cmd.c.o 00:03:38.078 [57/203] Compiling C object lib/libxnvme.so.p/xnvme_geo.c.o 00:03:38.078 [58/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf.c.o 00:03:38.078 [59/203] Compiling C object lib/libxnvme.so.p/xnvme_req.c.o 00:03:38.078 [60/203] Compiling C object lib/libxnvme.so.p/xnvme_dev.c.o 00:03:38.078 [61/203] Compiling C object lib/libxnvme.so.p/xnvme_lba.c.o 00:03:38.078 [62/203] Compiling C object lib/libxnvme.so.p/xnvme_nvm.c.o 00:03:38.078 [63/203] Compiling C object lib/libxnvme.so.p/xnvme_kvs.c.o 00:03:38.078 [64/203] Compiling C object lib/libxnvme.so.p/xnvme_buf.c.o 00:03:38.078 [65/203] Compiling C object lib/libxnvme.so.p/xnvme_opts.c.o 00:03:38.078 [66/203] Compiling C object lib/libxnvme.so.p/xnvme_ver.c.o 00:03:38.078 [67/203] Compiling C object lib/libxnvme.so.p/xnvme_topology.c.o 00:03:38.078 [68/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_admin_shim.c.o 00:03:38.078 [69/203] Compiling C object lib/libxnvme.so.p/xnvme_queue.c.o 00:03:38.337 [70/203] Compiling C object lib/libxnvme.a.p/xnvme_adm.c.o 00:03:38.337 [71/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_nil.c.o 00:03:38.337 [72/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_emu.c.o 00:03:38.337 [73/203] Compiling C object lib/libxnvme.so.p/xnvme_spec_pp.c.o 00:03:38.337 [74/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_mem_posix.c.o 00:03:38.337 [75/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd.c.o 00:03:38.337 [76/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_async.c.o 00:03:38.337 [77/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_posix.c.o 00:03:38.337 [78/203] Compiling C object lib/libxnvme.so.p/xnvme_cli.c.o 00:03:38.337 [79/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_nvme.c.o 00:03:38.337 [80/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_dev.c.o 00:03:38.337 [81/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_thrpool.c.o 00:03:38.337 [82/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_sync_psync.c.o 00:03:38.337 [83/203] Compiling C object lib/libxnvme.so.p/xnvme_znd.c.o 00:03:38.337 [84/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux.c.o 00:03:38.595 [85/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos.c.o 00:03:38.595 [86/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_admin.c.o 00:03:38.595 [87/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_ucmd.c.o 00:03:38.595 [88/203] Compiling C object lib/libxnvme.a.p/xnvme_be.c.o 00:03:38.595 [89/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_libaio.c.o 00:03:38.595 [90/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_dev.c.o 00:03:38.595 [91/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_hugepage.c.o 00:03:38.595 [92/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_sync.c.o 00:03:38.595 [93/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_dev.c.o 00:03:38.595 [94/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_nvme.c.o 00:03:38.595 [95/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_admin.c.o 00:03:38.595 [96/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk.c.o 00:03:38.595 [97/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk.c.o 00:03:38.595 [98/203] Compiling C object 
lib/libxnvme.a.p/xnvme_be_linux_block.c.o 00:03:38.595 [99/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_admin.c.o 00:03:38.595 [100/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_async.c.o 00:03:38.595 [101/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_dev.c.o 00:03:38.595 [102/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_dev.c.o 00:03:38.595 [103/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_sync.c.o 00:03:38.595 [104/203] Compiling C object lib/libxnvme.a.p/xnvme_be_nosys.c.o 00:03:38.854 [105/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_liburing.c.o 00:03:38.854 [106/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_mem.c.o 00:03:38.854 [107/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_sync.c.o 00:03:38.854 [108/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio.c.o 00:03:38.854 [109/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_admin.c.o 00:03:38.854 [110/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_dev.c.o 00:03:38.854 [111/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_mem.c.o 00:03:38.854 [112/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows.c.o 00:03:38.854 [113/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_async.c.o 00:03:38.854 [114/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp.c.o 00:03:38.854 [115/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_sync.c.o 00:03:38.854 [116/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp_th.c.o 00:03:38.854 [117/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_ioring.c.o 00:03:38.854 [118/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_block.c.o 00:03:38.854 [119/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_dev.c.o 00:03:38.854 [120/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_fs.c.o 00:03:38.854 [121/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_mem.c.o 00:03:38.854 [122/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_nvme.c.o 00:03:38.854 [123/203] Compiling C object lib/libxnvme.a.p/xnvme_file.c.o 00:03:38.854 [124/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf_entries.c.o 00:03:38.854 [125/203] Compiling C object lib/libxnvme.a.p/xnvme_dev.c.o 00:03:38.854 [126/203] Compiling C object lib/libxnvme.so.p/xnvme_spec.c.o 00:03:38.854 [127/203] Compiling C object lib/libxnvme.a.p/xnvme_geo.c.o 00:03:38.854 [128/203] Compiling C object lib/libxnvme.a.p/xnvme_cmd.c.o 00:03:38.854 [129/203] Compiling C object lib/libxnvme.a.p/xnvme_ident.c.o 00:03:38.854 [130/203] Compiling C object lib/libxnvme.a.p/xnvme_lba.c.o 00:03:38.854 [131/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf.c.o 00:03:39.113 [132/203] Compiling C object lib/libxnvme.a.p/xnvme_req.c.o 00:03:39.113 [133/203] Linking target lib/libxnvme.so 00:03:39.113 [134/203] Compiling C object lib/libxnvme.a.p/xnvme_opts.c.o 00:03:39.113 [135/203] Compiling C object lib/libxnvme.a.p/xnvme_nvm.c.o 00:03:39.113 [136/203] Compiling C object lib/libxnvme.a.p/xnvme_buf.c.o 00:03:39.113 [137/203] Compiling C object lib/libxnvme.a.p/xnvme_kvs.c.o 00:03:39.113 [138/203] Compiling C object lib/libxnvme.a.p/xnvme_ver.c.o 00:03:39.113 [139/203] Compiling C object lib/libxnvme.a.p/xnvme_topology.c.o 00:03:39.113 [140/203] Compiling C object lib/libxnvme.a.p/xnvme_queue.c.o 00:03:39.113 [141/203] Compiling C object lib/libxnvme.a.p/xnvme_spec_pp.c.o 00:03:39.113 [142/203] Compiling C object tests/xnvme_tests_cli.p/cli.c.o 00:03:39.113 
[143/203] Compiling C object tests/xnvme_tests_buf.p/buf.c.o 00:03:39.113 [144/203] Compiling C object tests/xnvme_tests_async_intf.p/async_intf.c.o 00:03:39.371 [145/203] Compiling C object tests/xnvme_tests_enum.p/enum.c.o 00:03:39.371 [146/203] Compiling C object tests/xnvme_tests_xnvme_cli.p/xnvme_cli.c.o 00:03:39.371 [147/203] Compiling C object tests/xnvme_tests_xnvme_file.p/xnvme_file.c.o 00:03:39.371 [148/203] Compiling C object tests/xnvme_tests_scc.p/scc.c.o 00:03:39.371 [149/203] Compiling C object lib/libxnvme.a.p/xnvme_znd.c.o 00:03:39.371 [150/203] Compiling C object tests/xnvme_tests_znd_state.p/znd_state.c.o 00:03:39.371 [151/203] Compiling C object tests/xnvme_tests_kvs.p/kvs.c.o 00:03:39.371 [152/203] Compiling C object tests/xnvme_tests_znd_explicit_open.p/znd_explicit_open.c.o 00:03:39.371 [153/203] Compiling C object tests/xnvme_tests_znd_append.p/znd_append.c.o 00:03:39.371 [154/203] Compiling C object tests/xnvme_tests_map.p/map.c.o 00:03:39.371 [155/203] Compiling C object tests/xnvme_tests_lblk.p/lblk.c.o 00:03:39.371 [156/203] Compiling C object lib/libxnvme.a.p/xnvme_cli.c.o 00:03:39.630 [157/203] Compiling C object examples/xnvme_dev.p/xnvme_dev.c.o 00:03:39.630 [158/203] Compiling C object tools/lblk.p/lblk.c.o 00:03:39.630 [159/203] Compiling C object tests/xnvme_tests_ioworker.p/ioworker.c.o 00:03:39.630 [160/203] Compiling C object tests/xnvme_tests_znd_zrwa.p/znd_zrwa.c.o 00:03:39.630 [161/203] Compiling C object examples/xnvme_enum.p/xnvme_enum.c.o 00:03:39.630 [162/203] Compiling C object examples/xnvme_hello.p/xnvme_hello.c.o 00:03:39.630 [163/203] Compiling C object tools/xdd.p/xdd.c.o 00:03:39.630 [164/203] Compiling C object examples/xnvme_single_async.p/xnvme_single_async.c.o 00:03:39.630 [165/203] Compiling C object tools/kvs.p/kvs.c.o 00:03:39.630 [166/203] Compiling C object examples/xnvme_single_sync.p/xnvme_single_sync.c.o 00:03:39.630 [167/203] Compiling C object examples/xnvme_io_async.p/xnvme_io_async.c.o 00:03:39.630 [168/203] Compiling C object examples/zoned_io_async.p/zoned_io_async.c.o 00:03:39.630 [169/203] Compiling C object tools/zoned.p/zoned.c.o 00:03:39.888 [170/203] Compiling C object examples/zoned_io_sync.p/zoned_io_sync.c.o 00:03:39.888 [171/203] Compiling C object tools/xnvme_file.p/xnvme_file.c.o 00:03:39.888 [172/203] Compiling C object lib/libxnvme.a.p/xnvme_spec.c.o 00:03:39.888 [173/203] Compiling C object tools/xnvme.p/xnvme.c.o 00:03:39.888 [174/203] Linking static target lib/libxnvme.a 00:03:39.888 [175/203] Linking target tests/xnvme_tests_enum 00:03:39.888 [176/203] Linking target tests/xnvme_tests_async_intf 00:03:39.888 [177/203] Linking target tests/xnvme_tests_buf 00:03:39.888 [178/203] Linking target tests/xnvme_tests_znd_state 00:03:39.888 [179/203] Linking target tests/xnvme_tests_cli 00:03:39.888 [180/203] Linking target tests/xnvme_tests_xnvme_file 00:03:39.888 [181/203] Linking target tests/xnvme_tests_znd_explicit_open 00:03:39.888 [182/203] Linking target tests/xnvme_tests_ioworker 00:03:39.888 [183/203] Linking target tests/xnvme_tests_znd_append 00:03:39.888 [184/203] Linking target tests/xnvme_tests_scc 00:03:39.888 [185/203] Linking target tests/xnvme_tests_xnvme_cli 00:03:39.888 [186/203] Linking target tests/xnvme_tests_lblk 00:03:40.147 [187/203] Linking target tests/xnvme_tests_kvs 00:03:40.147 [188/203] Linking target tests/xnvme_tests_znd_zrwa 00:03:40.147 [189/203] Linking target tools/lblk 00:03:40.147 [190/203] Linking target tools/xdd 00:03:40.147 [191/203] Linking target 
tests/xnvme_tests_map 00:03:40.147 [192/203] Linking target examples/xnvme_enum 00:03:40.147 [193/203] Linking target examples/xnvme_single_sync 00:03:40.147 [194/203] Linking target tools/xnvme_file 00:03:40.147 [195/203] Linking target tools/xnvme 00:03:40.147 [196/203] Linking target examples/xnvme_single_async 00:03:40.147 [197/203] Linking target examples/xnvme_io_async 00:03:40.147 [198/203] Linking target examples/xnvme_dev 00:03:40.147 [199/203] Linking target tools/zoned 00:03:40.147 [200/203] Linking target tools/kvs 00:03:40.147 [201/203] Linking target examples/xnvme_hello 00:03:40.147 [202/203] Linking target examples/zoned_io_sync 00:03:40.147 [203/203] Linking target examples/zoned_io_async 00:03:40.147 INFO: autodetecting backend as ninja 00:03:40.147 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:03:40.147 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:03:58.273 CC lib/ut/ut.o 00:03:58.273 CC lib/log/log_flags.o 00:03:58.273 CC lib/log/log.o 00:03:58.273 CC lib/log/log_deprecated.o 00:03:58.273 CC lib/ut_mock/mock.o 00:03:58.531 LIB libspdk_ut_mock.a 00:03:58.531 LIB libspdk_log.a 00:03:58.531 LIB libspdk_ut.a 00:03:58.531 SO libspdk_ut_mock.so.6.0 00:03:58.531 SO libspdk_ut.so.2.0 00:03:58.531 SO libspdk_log.so.7.0 00:03:58.531 SYMLINK libspdk_ut_mock.so 00:03:58.531 SYMLINK libspdk_ut.so 00:03:58.531 SYMLINK libspdk_log.so 00:03:58.788 CXX lib/trace_parser/trace.o 00:03:58.788 CC lib/dma/dma.o 00:03:58.788 CC lib/util/base64.o 00:03:58.788 CC lib/util/cpuset.o 00:03:58.788 CC lib/util/bit_array.o 00:03:58.788 CC lib/util/crc16.o 00:03:58.788 CC lib/util/crc32.o 00:03:58.788 CC lib/util/crc32c.o 00:03:58.788 CC lib/ioat/ioat.o 00:03:58.788 CC lib/vfio_user/host/vfio_user_pci.o 00:03:59.046 CC lib/util/crc32_ieee.o 00:03:59.046 CC lib/util/crc64.o 00:03:59.046 CC lib/util/dif.o 00:03:59.046 CC lib/util/fd.o 00:03:59.046 LIB libspdk_dma.a 00:03:59.046 CC lib/util/file.o 00:03:59.046 SO libspdk_dma.so.4.0 00:03:59.046 CC lib/vfio_user/host/vfio_user.o 00:03:59.046 CC lib/util/hexlify.o 00:03:59.046 CC lib/util/iov.o 00:03:59.046 SYMLINK libspdk_dma.so 00:03:59.046 CC lib/util/math.o 00:03:59.046 LIB libspdk_ioat.a 00:03:59.046 CC lib/util/pipe.o 00:03:59.304 SO libspdk_ioat.so.7.0 00:03:59.304 CC lib/util/strerror_tls.o 00:03:59.304 CC lib/util/string.o 00:03:59.304 SYMLINK libspdk_ioat.so 00:03:59.304 CC lib/util/uuid.o 00:03:59.304 CC lib/util/fd_group.o 00:03:59.304 CC lib/util/xor.o 00:03:59.304 CC lib/util/zipf.o 00:03:59.304 LIB libspdk_vfio_user.a 00:03:59.304 SO libspdk_vfio_user.so.5.0 00:03:59.304 SYMLINK libspdk_vfio_user.so 00:03:59.871 LIB libspdk_util.a 00:03:59.871 SO libspdk_util.so.9.1 00:03:59.871 LIB libspdk_trace_parser.a 00:03:59.871 SO libspdk_trace_parser.so.5.0 00:03:59.871 SYMLINK libspdk_util.so 00:04:00.129 SYMLINK libspdk_trace_parser.so 00:04:00.129 CC lib/conf/conf.o 00:04:00.129 CC lib/env_dpdk/env.o 00:04:00.129 CC lib/env_dpdk/memory.o 00:04:00.129 CC lib/env_dpdk/pci.o 00:04:00.129 CC lib/env_dpdk/init.o 00:04:00.129 CC lib/vmd/vmd.o 00:04:00.129 CC lib/json/json_parse.o 00:04:00.129 CC lib/rdma_provider/common.o 00:04:00.129 CC lib/rdma_utils/rdma_utils.o 00:04:00.129 CC lib/idxd/idxd.o 00:04:00.387 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:00.387 LIB libspdk_conf.a 00:04:00.387 CC lib/json/json_util.o 00:04:00.387 SO libspdk_conf.so.6.0 00:04:00.387 LIB libspdk_rdma_utils.a 00:04:00.387 SO libspdk_rdma_utils.so.1.0 00:04:00.644 SYMLINK libspdk_conf.so 00:04:00.644 CC 
lib/json/json_write.o 00:04:00.644 CC lib/idxd/idxd_user.o 00:04:00.644 CC lib/idxd/idxd_kernel.o 00:04:00.644 LIB libspdk_rdma_provider.a 00:04:00.644 SYMLINK libspdk_rdma_utils.so 00:04:00.644 CC lib/env_dpdk/threads.o 00:04:00.644 CC lib/env_dpdk/pci_ioat.o 00:04:00.644 SO libspdk_rdma_provider.so.6.0 00:04:00.644 SYMLINK libspdk_rdma_provider.so 00:04:00.644 CC lib/vmd/led.o 00:04:00.644 CC lib/env_dpdk/pci_virtio.o 00:04:00.644 CC lib/env_dpdk/pci_vmd.o 00:04:00.644 CC lib/env_dpdk/pci_idxd.o 00:04:00.644 CC lib/env_dpdk/pci_event.o 00:04:00.901 CC lib/env_dpdk/sigbus_handler.o 00:04:00.901 CC lib/env_dpdk/pci_dpdk.o 00:04:00.901 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:00.901 LIB libspdk_json.a 00:04:00.901 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:00.901 LIB libspdk_idxd.a 00:04:00.901 SO libspdk_json.so.6.0 00:04:00.901 SO libspdk_idxd.so.12.0 00:04:00.901 SYMLINK libspdk_json.so 00:04:00.901 LIB libspdk_vmd.a 00:04:00.901 SYMLINK libspdk_idxd.so 00:04:01.159 SO libspdk_vmd.so.6.0 00:04:01.159 SYMLINK libspdk_vmd.so 00:04:01.159 CC lib/jsonrpc/jsonrpc_server.o 00:04:01.159 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:01.159 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:01.159 CC lib/jsonrpc/jsonrpc_client.o 00:04:01.418 LIB libspdk_jsonrpc.a 00:04:01.418 SO libspdk_jsonrpc.so.6.0 00:04:01.676 SYMLINK libspdk_jsonrpc.so 00:04:01.934 CC lib/rpc/rpc.o 00:04:01.934 LIB libspdk_env_dpdk.a 00:04:01.934 SO libspdk_env_dpdk.so.14.1 00:04:02.193 LIB libspdk_rpc.a 00:04:02.193 SYMLINK libspdk_env_dpdk.so 00:04:02.193 SO libspdk_rpc.so.6.0 00:04:02.193 SYMLINK libspdk_rpc.so 00:04:02.451 CC lib/trace/trace.o 00:04:02.451 CC lib/trace/trace_rpc.o 00:04:02.451 CC lib/trace/trace_flags.o 00:04:02.451 CC lib/keyring/keyring.o 00:04:02.451 CC lib/keyring/keyring_rpc.o 00:04:02.451 CC lib/notify/notify.o 00:04:02.451 CC lib/notify/notify_rpc.o 00:04:02.709 LIB libspdk_notify.a 00:04:02.709 SO libspdk_notify.so.6.0 00:04:02.709 LIB libspdk_keyring.a 00:04:02.709 SO libspdk_keyring.so.1.0 00:04:02.709 SYMLINK libspdk_notify.so 00:04:02.709 LIB libspdk_trace.a 00:04:02.968 SYMLINK libspdk_keyring.so 00:04:02.968 SO libspdk_trace.so.10.0 00:04:02.968 SYMLINK libspdk_trace.so 00:04:03.227 CC lib/thread/thread.o 00:04:03.227 CC lib/sock/sock.o 00:04:03.227 CC lib/thread/iobuf.o 00:04:03.227 CC lib/sock/sock_rpc.o 00:04:03.794 LIB libspdk_sock.a 00:04:03.794 SO libspdk_sock.so.10.0 00:04:04.053 SYMLINK libspdk_sock.so 00:04:04.311 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:04.311 CC lib/nvme/nvme_ctrlr.o 00:04:04.311 CC lib/nvme/nvme_fabric.o 00:04:04.311 CC lib/nvme/nvme_ns.o 00:04:04.311 CC lib/nvme/nvme_ns_cmd.o 00:04:04.311 CC lib/nvme/nvme_pcie_common.o 00:04:04.311 CC lib/nvme/nvme_pcie.o 00:04:04.311 CC lib/nvme/nvme_qpair.o 00:04:04.311 CC lib/nvme/nvme.o 00:04:05.245 CC lib/nvme/nvme_quirks.o 00:04:05.245 CC lib/nvme/nvme_transport.o 00:04:05.245 CC lib/nvme/nvme_discovery.o 00:04:05.245 LIB libspdk_thread.a 00:04:05.245 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:05.245 SO libspdk_thread.so.10.1 00:04:05.245 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:05.245 CC lib/nvme/nvme_tcp.o 00:04:05.245 SYMLINK libspdk_thread.so 00:04:05.245 CC lib/nvme/nvme_opal.o 00:04:05.245 CC lib/nvme/nvme_io_msg.o 00:04:05.503 CC lib/nvme/nvme_poll_group.o 00:04:05.762 CC lib/nvme/nvme_zns.o 00:04:05.762 CC lib/nvme/nvme_stubs.o 00:04:06.021 CC lib/nvme/nvme_auth.o 00:04:06.021 CC lib/nvme/nvme_cuse.o 00:04:06.021 CC lib/accel/accel.o 00:04:06.021 CC lib/blob/blobstore.o 00:04:06.021 CC lib/init/json_config.o 00:04:06.279 CC lib/blob/request.o 
00:04:06.279 CC lib/virtio/virtio.o 00:04:06.279 CC lib/virtio/virtio_vhost_user.o 00:04:06.538 CC lib/init/subsystem.o 00:04:06.538 CC lib/init/subsystem_rpc.o 00:04:06.795 CC lib/init/rpc.o 00:04:06.795 CC lib/accel/accel_rpc.o 00:04:06.795 CC lib/virtio/virtio_vfio_user.o 00:04:06.795 CC lib/accel/accel_sw.o 00:04:06.795 LIB libspdk_init.a 00:04:06.795 SO libspdk_init.so.5.0 00:04:07.053 CC lib/blob/zeroes.o 00:04:07.053 SYMLINK libspdk_init.so 00:04:07.053 CC lib/nvme/nvme_rdma.o 00:04:07.053 CC lib/blob/blob_bs_dev.o 00:04:07.053 CC lib/virtio/virtio_pci.o 00:04:07.310 CC lib/event/app.o 00:04:07.310 CC lib/event/reactor.o 00:04:07.310 CC lib/event/app_rpc.o 00:04:07.310 CC lib/event/log_rpc.o 00:04:07.310 CC lib/event/scheduler_static.o 00:04:07.310 LIB libspdk_accel.a 00:04:07.310 SO libspdk_accel.so.15.1 00:04:07.310 LIB libspdk_virtio.a 00:04:07.310 SYMLINK libspdk_accel.so 00:04:07.310 SO libspdk_virtio.so.7.0 00:04:07.569 SYMLINK libspdk_virtio.so 00:04:07.569 CC lib/bdev/bdev.o 00:04:07.569 CC lib/bdev/bdev_rpc.o 00:04:07.569 CC lib/bdev/bdev_zone.o 00:04:07.569 CC lib/bdev/scsi_nvme.o 00:04:07.569 CC lib/bdev/part.o 00:04:07.829 LIB libspdk_event.a 00:04:07.829 SO libspdk_event.so.14.0 00:04:07.829 SYMLINK libspdk_event.so 00:04:08.767 LIB libspdk_nvme.a 00:04:08.767 SO libspdk_nvme.so.13.1 00:04:09.036 SYMLINK libspdk_nvme.so 00:04:09.988 LIB libspdk_blob.a 00:04:09.988 SO libspdk_blob.so.11.0 00:04:10.247 SYMLINK libspdk_blob.so 00:04:10.506 CC lib/blobfs/blobfs.o 00:04:10.506 CC lib/blobfs/tree.o 00:04:10.506 CC lib/lvol/lvol.o 00:04:10.765 LIB libspdk_bdev.a 00:04:11.023 SO libspdk_bdev.so.15.1 00:04:11.024 SYMLINK libspdk_bdev.so 00:04:11.282 CC lib/nvmf/ctrlr_discovery.o 00:04:11.282 CC lib/nvmf/ctrlr.o 00:04:11.282 CC lib/nvmf/ctrlr_bdev.o 00:04:11.282 CC lib/nvmf/subsystem.o 00:04:11.282 CC lib/ublk/ublk.o 00:04:11.282 CC lib/scsi/dev.o 00:04:11.282 CC lib/nbd/nbd.o 00:04:11.282 CC lib/ftl/ftl_core.o 00:04:11.541 LIB libspdk_blobfs.a 00:04:11.541 SO libspdk_blobfs.so.10.0 00:04:11.541 CC lib/scsi/lun.o 00:04:11.541 SYMLINK libspdk_blobfs.so 00:04:11.541 CC lib/scsi/port.o 00:04:11.800 LIB libspdk_lvol.a 00:04:11.800 SO libspdk_lvol.so.10.0 00:04:11.800 SYMLINK libspdk_lvol.so 00:04:11.800 CC lib/scsi/scsi.o 00:04:11.800 CC lib/scsi/scsi_bdev.o 00:04:11.800 CC lib/ftl/ftl_init.o 00:04:11.800 CC lib/nbd/nbd_rpc.o 00:04:12.059 CC lib/ftl/ftl_layout.o 00:04:12.059 CC lib/ftl/ftl_debug.o 00:04:12.059 CC lib/ftl/ftl_io.o 00:04:12.059 LIB libspdk_nbd.a 00:04:12.059 CC lib/ublk/ublk_rpc.o 00:04:12.059 CC lib/ftl/ftl_sb.o 00:04:12.059 SO libspdk_nbd.so.7.0 00:04:12.059 CC lib/nvmf/nvmf.o 00:04:12.317 SYMLINK libspdk_nbd.so 00:04:12.317 CC lib/ftl/ftl_l2p.o 00:04:12.317 CC lib/ftl/ftl_l2p_flat.o 00:04:12.317 CC lib/ftl/ftl_nv_cache.o 00:04:12.317 LIB libspdk_ublk.a 00:04:12.317 CC lib/scsi/scsi_pr.o 00:04:12.317 CC lib/scsi/scsi_rpc.o 00:04:12.317 SO libspdk_ublk.so.3.0 00:04:12.317 SYMLINK libspdk_ublk.so 00:04:12.318 CC lib/nvmf/nvmf_rpc.o 00:04:12.318 CC lib/nvmf/transport.o 00:04:12.576 CC lib/nvmf/tcp.o 00:04:12.576 CC lib/ftl/ftl_band.o 00:04:12.576 CC lib/ftl/ftl_band_ops.o 00:04:12.576 CC lib/scsi/task.o 00:04:12.834 CC lib/nvmf/stubs.o 00:04:12.834 CC lib/nvmf/mdns_server.o 00:04:12.834 CC lib/nvmf/rdma.o 00:04:12.834 LIB libspdk_scsi.a 00:04:13.093 SO libspdk_scsi.so.9.0 00:04:13.093 SYMLINK libspdk_scsi.so 00:04:13.093 CC lib/nvmf/auth.o 00:04:13.351 CC lib/ftl/ftl_writer.o 00:04:13.351 CC lib/iscsi/conn.o 00:04:13.351 CC lib/iscsi/init_grp.o 00:04:13.351 CC 
lib/ftl/ftl_rq.o 00:04:13.610 CC lib/iscsi/iscsi.o 00:04:13.610 CC lib/ftl/ftl_reloc.o 00:04:13.610 CC lib/ftl/ftl_l2p_cache.o 00:04:13.610 CC lib/vhost/vhost.o 00:04:13.610 CC lib/vhost/vhost_rpc.o 00:04:13.610 CC lib/iscsi/md5.o 00:04:13.868 CC lib/vhost/vhost_scsi.o 00:04:14.126 CC lib/vhost/vhost_blk.o 00:04:14.126 CC lib/vhost/rte_vhost_user.o 00:04:14.126 CC lib/iscsi/param.o 00:04:14.126 CC lib/ftl/ftl_p2l.o 00:04:14.384 CC lib/ftl/mngt/ftl_mngt.o 00:04:14.384 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:14.384 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:14.642 CC lib/iscsi/portal_grp.o 00:04:14.642 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:14.642 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:14.642 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:14.642 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:14.901 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:14.902 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:14.902 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:14.902 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:14.902 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:14.902 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:15.161 CC lib/iscsi/tgt_node.o 00:04:15.161 CC lib/ftl/utils/ftl_conf.o 00:04:15.161 CC lib/iscsi/iscsi_subsystem.o 00:04:15.161 CC lib/ftl/utils/ftl_md.o 00:04:15.161 CC lib/ftl/utils/ftl_mempool.o 00:04:15.161 CC lib/ftl/utils/ftl_bitmap.o 00:04:15.420 LIB libspdk_vhost.a 00:04:15.420 CC lib/iscsi/iscsi_rpc.o 00:04:15.420 CC lib/ftl/utils/ftl_property.o 00:04:15.420 SO libspdk_vhost.so.8.0 00:04:15.420 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:15.420 CC lib/iscsi/task.o 00:04:15.420 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:15.679 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:15.679 SYMLINK libspdk_vhost.so 00:04:15.679 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:15.679 LIB libspdk_nvmf.a 00:04:15.679 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:15.679 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:15.679 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:15.679 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:15.938 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:15.938 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:15.938 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:15.938 SO libspdk_nvmf.so.18.1 00:04:15.938 CC lib/ftl/base/ftl_base_dev.o 00:04:15.938 LIB libspdk_iscsi.a 00:04:15.938 CC lib/ftl/base/ftl_base_bdev.o 00:04:15.938 CC lib/ftl/ftl_trace.o 00:04:15.938 SO libspdk_iscsi.so.8.0 00:04:16.201 SYMLINK libspdk_nvmf.so 00:04:16.201 SYMLINK libspdk_iscsi.so 00:04:16.201 LIB libspdk_ftl.a 00:04:16.458 SO libspdk_ftl.so.9.0 00:04:16.716 SYMLINK libspdk_ftl.so 00:04:17.279 CC module/env_dpdk/env_dpdk_rpc.o 00:04:17.279 CC module/sock/posix/posix.o 00:04:17.279 CC module/accel/dsa/accel_dsa.o 00:04:17.279 CC module/accel/error/accel_error.o 00:04:17.279 CC module/keyring/file/keyring.o 00:04:17.279 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:17.279 CC module/accel/ioat/accel_ioat.o 00:04:17.279 CC module/keyring/linux/keyring.o 00:04:17.279 CC module/accel/iaa/accel_iaa.o 00:04:17.279 CC module/blob/bdev/blob_bdev.o 00:04:17.279 LIB libspdk_env_dpdk_rpc.a 00:04:17.279 SO libspdk_env_dpdk_rpc.so.6.0 00:04:17.279 CC module/keyring/file/keyring_rpc.o 00:04:17.279 CC module/keyring/linux/keyring_rpc.o 00:04:17.279 SYMLINK libspdk_env_dpdk_rpc.so 00:04:17.279 CC module/accel/iaa/accel_iaa_rpc.o 00:04:17.537 CC module/accel/error/accel_error_rpc.o 00:04:17.537 LIB libspdk_scheduler_dynamic.a 00:04:17.537 CC module/accel/ioat/accel_ioat_rpc.o 00:04:17.537 SO libspdk_scheduler_dynamic.so.4.0 00:04:17.537 CC module/accel/dsa/accel_dsa_rpc.o 00:04:17.537 LIB libspdk_keyring_linux.a 
00:04:17.537 LIB libspdk_keyring_file.a 00:04:17.537 LIB libspdk_accel_iaa.a 00:04:17.537 SYMLINK libspdk_scheduler_dynamic.so 00:04:17.537 SO libspdk_keyring_linux.so.1.0 00:04:17.537 SO libspdk_keyring_file.so.1.0 00:04:17.537 LIB libspdk_accel_error.a 00:04:17.537 SO libspdk_accel_iaa.so.3.0 00:04:17.537 LIB libspdk_blob_bdev.a 00:04:17.537 LIB libspdk_accel_ioat.a 00:04:17.537 SO libspdk_blob_bdev.so.11.0 00:04:17.537 SO libspdk_accel_error.so.2.0 00:04:17.537 SO libspdk_accel_ioat.so.6.0 00:04:17.537 SYMLINK libspdk_keyring_linux.so 00:04:17.537 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:17.537 SYMLINK libspdk_accel_iaa.so 00:04:17.537 SYMLINK libspdk_keyring_file.so 00:04:17.537 LIB libspdk_accel_dsa.a 00:04:17.794 SYMLINK libspdk_blob_bdev.so 00:04:17.794 SYMLINK libspdk_accel_error.so 00:04:17.794 SYMLINK libspdk_accel_ioat.so 00:04:17.794 SO libspdk_accel_dsa.so.5.0 00:04:17.794 CC module/scheduler/gscheduler/gscheduler.o 00:04:17.794 SYMLINK libspdk_accel_dsa.so 00:04:17.794 LIB libspdk_scheduler_dpdk_governor.a 00:04:17.794 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:18.051 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:18.051 CC module/bdev/delay/vbdev_delay.o 00:04:18.051 LIB libspdk_scheduler_gscheduler.a 00:04:18.051 CC module/bdev/malloc/bdev_malloc.o 00:04:18.051 CC module/bdev/null/bdev_null.o 00:04:18.051 CC module/bdev/error/vbdev_error.o 00:04:18.051 CC module/bdev/lvol/vbdev_lvol.o 00:04:18.051 CC module/bdev/gpt/gpt.o 00:04:18.051 CC module/blobfs/bdev/blobfs_bdev.o 00:04:18.051 SO libspdk_scheduler_gscheduler.so.4.0 00:04:18.051 SYMLINK libspdk_scheduler_gscheduler.so 00:04:18.051 CC module/bdev/gpt/vbdev_gpt.o 00:04:18.051 CC module/bdev/nvme/bdev_nvme.o 00:04:18.051 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:18.309 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:18.309 LIB libspdk_sock_posix.a 00:04:18.309 CC module/bdev/null/bdev_null_rpc.o 00:04:18.309 CC module/bdev/error/vbdev_error_rpc.o 00:04:18.309 SO libspdk_sock_posix.so.6.0 00:04:18.309 LIB libspdk_blobfs_bdev.a 00:04:18.309 LIB libspdk_bdev_gpt.a 00:04:18.309 SYMLINK libspdk_sock_posix.so 00:04:18.309 CC module/bdev/nvme/nvme_rpc.o 00:04:18.309 SO libspdk_blobfs_bdev.so.6.0 00:04:18.309 SO libspdk_bdev_gpt.so.6.0 00:04:18.309 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:18.309 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:18.567 LIB libspdk_bdev_error.a 00:04:18.567 LIB libspdk_bdev_null.a 00:04:18.567 SO libspdk_bdev_error.so.6.0 00:04:18.567 SYMLINK libspdk_blobfs_bdev.so 00:04:18.567 SO libspdk_bdev_null.so.6.0 00:04:18.567 SYMLINK libspdk_bdev_gpt.so 00:04:18.567 CC module/bdev/nvme/bdev_mdns_client.o 00:04:18.567 CC module/bdev/nvme/vbdev_opal.o 00:04:18.567 SYMLINK libspdk_bdev_null.so 00:04:18.567 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:18.567 LIB libspdk_bdev_malloc.a 00:04:18.567 SYMLINK libspdk_bdev_error.so 00:04:18.567 LIB libspdk_bdev_delay.a 00:04:18.567 SO libspdk_bdev_malloc.so.6.0 00:04:18.567 SO libspdk_bdev_delay.so.6.0 00:04:18.825 SYMLINK libspdk_bdev_delay.so 00:04:18.825 SYMLINK libspdk_bdev_malloc.so 00:04:18.825 CC module/bdev/passthru/vbdev_passthru.o 00:04:18.825 CC module/bdev/raid/bdev_raid.o 00:04:18.825 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:18.825 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:18.825 CC module/bdev/split/vbdev_split.o 00:04:18.825 CC module/bdev/xnvme/bdev_xnvme.o 00:04:18.825 CC module/bdev/aio/bdev_aio.o 00:04:19.083 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:19.083 CC module/bdev/nvme/vbdev_opal_rpc.o 
00:04:19.083 LIB libspdk_bdev_lvol.a 00:04:19.083 SO libspdk_bdev_lvol.so.6.0 00:04:19.083 LIB libspdk_bdev_passthru.a 00:04:19.083 CC module/bdev/split/vbdev_split_rpc.o 00:04:19.083 SO libspdk_bdev_passthru.so.6.0 00:04:19.083 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:04:19.083 SYMLINK libspdk_bdev_lvol.so 00:04:19.341 SYMLINK libspdk_bdev_passthru.so 00:04:19.341 CC module/bdev/raid/bdev_raid_rpc.o 00:04:19.341 LIB libspdk_bdev_zone_block.a 00:04:19.341 SO libspdk_bdev_zone_block.so.6.0 00:04:19.341 LIB libspdk_bdev_split.a 00:04:19.341 CC module/bdev/ftl/bdev_ftl.o 00:04:19.341 LIB libspdk_bdev_xnvme.a 00:04:19.341 CC module/bdev/aio/bdev_aio_rpc.o 00:04:19.341 SO libspdk_bdev_split.so.6.0 00:04:19.341 SO libspdk_bdev_xnvme.so.3.0 00:04:19.341 CC module/bdev/iscsi/bdev_iscsi.o 00:04:19.341 SYMLINK libspdk_bdev_zone_block.so 00:04:19.341 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:19.341 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:19.341 SYMLINK libspdk_bdev_split.so 00:04:19.341 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:19.341 SYMLINK libspdk_bdev_xnvme.so 00:04:19.341 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:19.599 LIB libspdk_bdev_aio.a 00:04:19.599 CC module/bdev/raid/bdev_raid_sb.o 00:04:19.599 SO libspdk_bdev_aio.so.6.0 00:04:19.599 CC module/bdev/raid/raid0.o 00:04:19.599 SYMLINK libspdk_bdev_aio.so 00:04:19.599 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:19.599 CC module/bdev/raid/raid1.o 00:04:19.599 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:19.857 CC module/bdev/raid/concat.o 00:04:19.857 LIB libspdk_bdev_iscsi.a 00:04:19.857 SO libspdk_bdev_iscsi.so.6.0 00:04:19.857 LIB libspdk_bdev_ftl.a 00:04:19.857 SYMLINK libspdk_bdev_iscsi.so 00:04:19.857 SO libspdk_bdev_ftl.so.6.0 00:04:20.115 SYMLINK libspdk_bdev_ftl.so 00:04:20.115 LIB libspdk_bdev_raid.a 00:04:20.115 LIB libspdk_bdev_virtio.a 00:04:20.115 SO libspdk_bdev_raid.so.6.0 00:04:20.115 SO libspdk_bdev_virtio.so.6.0 00:04:20.115 SYMLINK libspdk_bdev_raid.so 00:04:20.373 SYMLINK libspdk_bdev_virtio.so 00:04:20.941 LIB libspdk_bdev_nvme.a 00:04:20.941 SO libspdk_bdev_nvme.so.7.0 00:04:21.199 SYMLINK libspdk_bdev_nvme.so 00:04:21.766 CC module/event/subsystems/scheduler/scheduler.o 00:04:21.766 CC module/event/subsystems/vmd/vmd.o 00:04:21.766 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:21.766 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:21.766 CC module/event/subsystems/keyring/keyring.o 00:04:21.766 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:21.766 CC module/event/subsystems/iobuf/iobuf.o 00:04:21.766 CC module/event/subsystems/sock/sock.o 00:04:21.766 LIB libspdk_event_scheduler.a 00:04:21.766 LIB libspdk_event_vmd.a 00:04:21.766 LIB libspdk_event_keyring.a 00:04:21.766 LIB libspdk_event_vhost_blk.a 00:04:21.766 SO libspdk_event_scheduler.so.4.0 00:04:21.766 LIB libspdk_event_sock.a 00:04:21.766 SO libspdk_event_vmd.so.6.0 00:04:21.766 SO libspdk_event_keyring.so.1.0 00:04:21.766 LIB libspdk_event_iobuf.a 00:04:21.766 SO libspdk_event_vhost_blk.so.3.0 00:04:21.766 SO libspdk_event_sock.so.5.0 00:04:21.766 SO libspdk_event_iobuf.so.3.0 00:04:21.766 SYMLINK libspdk_event_keyring.so 00:04:21.766 SYMLINK libspdk_event_vhost_blk.so 00:04:22.025 SYMLINK libspdk_event_scheduler.so 00:04:22.025 SYMLINK libspdk_event_vmd.so 00:04:22.025 SYMLINK libspdk_event_sock.so 00:04:22.025 SYMLINK libspdk_event_iobuf.so 00:04:22.283 CC module/event/subsystems/accel/accel.o 00:04:22.283 LIB libspdk_event_accel.a 00:04:22.542 SO libspdk_event_accel.so.6.0 00:04:22.542 SYMLINK libspdk_event_accel.so 00:04:22.800 CC 
module/event/subsystems/bdev/bdev.o 00:04:23.059 LIB libspdk_event_bdev.a 00:04:23.059 SO libspdk_event_bdev.so.6.0 00:04:23.059 SYMLINK libspdk_event_bdev.so 00:04:23.318 CC module/event/subsystems/nbd/nbd.o 00:04:23.318 CC module/event/subsystems/ublk/ublk.o 00:04:23.318 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:23.318 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:23.318 CC module/event/subsystems/scsi/scsi.o 00:04:23.577 LIB libspdk_event_nbd.a 00:04:23.577 LIB libspdk_event_ublk.a 00:04:23.577 LIB libspdk_event_scsi.a 00:04:23.577 SO libspdk_event_ublk.so.3.0 00:04:23.577 SO libspdk_event_nbd.so.6.0 00:04:23.577 SO libspdk_event_scsi.so.6.0 00:04:23.577 LIB libspdk_event_nvmf.a 00:04:23.577 SYMLINK libspdk_event_ublk.so 00:04:23.577 SYMLINK libspdk_event_nbd.so 00:04:23.577 SYMLINK libspdk_event_scsi.so 00:04:23.577 SO libspdk_event_nvmf.so.6.0 00:04:23.835 SYMLINK libspdk_event_nvmf.so 00:04:23.835 CC module/event/subsystems/iscsi/iscsi.o 00:04:23.835 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:24.102 LIB libspdk_event_vhost_scsi.a 00:04:24.102 LIB libspdk_event_iscsi.a 00:04:24.102 SO libspdk_event_vhost_scsi.so.3.0 00:04:24.102 SO libspdk_event_iscsi.so.6.0 00:04:24.102 SYMLINK libspdk_event_vhost_scsi.so 00:04:24.102 SYMLINK libspdk_event_iscsi.so 00:04:24.409 SO libspdk.so.6.0 00:04:24.409 SYMLINK libspdk.so 00:04:24.668 CC app/trace_record/trace_record.o 00:04:24.668 CXX app/trace/trace.o 00:04:24.668 CC app/spdk_nvme_perf/perf.o 00:04:24.668 CC app/spdk_lspci/spdk_lspci.o 00:04:24.668 CC app/iscsi_tgt/iscsi_tgt.o 00:04:24.668 CC app/nvmf_tgt/nvmf_main.o 00:04:24.668 CC app/spdk_tgt/spdk_tgt.o 00:04:24.668 CC test/thread/poller_perf/poller_perf.o 00:04:24.668 CC examples/ioat/perf/perf.o 00:04:24.668 CC examples/util/zipf/zipf.o 00:04:24.668 LINK spdk_lspci 00:04:24.926 LINK nvmf_tgt 00:04:24.926 LINK spdk_trace_record 00:04:24.926 LINK poller_perf 00:04:24.926 LINK iscsi_tgt 00:04:24.926 LINK zipf 00:04:24.927 LINK spdk_tgt 00:04:24.927 LINK ioat_perf 00:04:25.185 CC app/spdk_nvme_identify/identify.o 00:04:25.185 LINK spdk_trace 00:04:25.185 CC app/spdk_nvme_discover/discovery_aer.o 00:04:25.185 CC app/spdk_top/spdk_top.o 00:04:25.185 CC examples/ioat/verify/verify.o 00:04:25.185 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:25.185 CC test/dma/test_dma/test_dma.o 00:04:25.442 CC app/spdk_dd/spdk_dd.o 00:04:25.442 CC app/fio/nvme/fio_plugin.o 00:04:25.442 LINK interrupt_tgt 00:04:25.442 LINK spdk_nvme_discover 00:04:25.442 LINK verify 00:04:25.442 CC test/app/bdev_svc/bdev_svc.o 00:04:25.701 LINK spdk_nvme_perf 00:04:25.701 LINK test_dma 00:04:25.701 CC test/app/histogram_perf/histogram_perf.o 00:04:25.701 LINK bdev_svc 00:04:25.701 LINK spdk_dd 00:04:25.701 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:25.960 CC examples/thread/thread/thread_ex.o 00:04:25.960 LINK histogram_perf 00:04:25.960 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:25.960 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:25.960 TEST_HEADER include/spdk/accel.h 00:04:25.960 TEST_HEADER include/spdk/accel_module.h 00:04:25.960 TEST_HEADER include/spdk/assert.h 00:04:26.218 TEST_HEADER include/spdk/barrier.h 00:04:26.218 TEST_HEADER include/spdk/base64.h 00:04:26.218 TEST_HEADER include/spdk/bdev.h 00:04:26.218 TEST_HEADER include/spdk/bdev_module.h 00:04:26.218 LINK spdk_nvme_identify 00:04:26.218 TEST_HEADER include/spdk/bdev_zone.h 00:04:26.218 TEST_HEADER include/spdk/bit_array.h 00:04:26.218 TEST_HEADER include/spdk/bit_pool.h 00:04:26.218 TEST_HEADER include/spdk/blob_bdev.h 
00:04:26.218 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:26.218 TEST_HEADER include/spdk/blobfs.h 00:04:26.218 TEST_HEADER include/spdk/blob.h 00:04:26.218 TEST_HEADER include/spdk/conf.h 00:04:26.218 LINK spdk_nvme 00:04:26.218 TEST_HEADER include/spdk/config.h 00:04:26.218 TEST_HEADER include/spdk/cpuset.h 00:04:26.218 TEST_HEADER include/spdk/crc16.h 00:04:26.218 TEST_HEADER include/spdk/crc32.h 00:04:26.218 TEST_HEADER include/spdk/crc64.h 00:04:26.218 TEST_HEADER include/spdk/dif.h 00:04:26.218 TEST_HEADER include/spdk/dma.h 00:04:26.218 TEST_HEADER include/spdk/endian.h 00:04:26.218 CC test/app/jsoncat/jsoncat.o 00:04:26.218 TEST_HEADER include/spdk/env_dpdk.h 00:04:26.218 TEST_HEADER include/spdk/env.h 00:04:26.218 TEST_HEADER include/spdk/event.h 00:04:26.218 TEST_HEADER include/spdk/fd_group.h 00:04:26.218 TEST_HEADER include/spdk/fd.h 00:04:26.218 TEST_HEADER include/spdk/file.h 00:04:26.218 TEST_HEADER include/spdk/ftl.h 00:04:26.218 TEST_HEADER include/spdk/gpt_spec.h 00:04:26.218 TEST_HEADER include/spdk/hexlify.h 00:04:26.218 LINK thread 00:04:26.218 TEST_HEADER include/spdk/histogram_data.h 00:04:26.218 TEST_HEADER include/spdk/idxd.h 00:04:26.218 TEST_HEADER include/spdk/idxd_spec.h 00:04:26.218 TEST_HEADER include/spdk/init.h 00:04:26.218 TEST_HEADER include/spdk/ioat.h 00:04:26.218 TEST_HEADER include/spdk/ioat_spec.h 00:04:26.218 TEST_HEADER include/spdk/iscsi_spec.h 00:04:26.218 TEST_HEADER include/spdk/json.h 00:04:26.218 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:26.218 TEST_HEADER include/spdk/jsonrpc.h 00:04:26.218 TEST_HEADER include/spdk/keyring.h 00:04:26.218 TEST_HEADER include/spdk/keyring_module.h 00:04:26.218 TEST_HEADER include/spdk/likely.h 00:04:26.218 TEST_HEADER include/spdk/log.h 00:04:26.218 TEST_HEADER include/spdk/lvol.h 00:04:26.218 TEST_HEADER include/spdk/memory.h 00:04:26.218 TEST_HEADER include/spdk/mmio.h 00:04:26.218 TEST_HEADER include/spdk/nbd.h 00:04:26.218 TEST_HEADER include/spdk/notify.h 00:04:26.218 TEST_HEADER include/spdk/nvme.h 00:04:26.218 TEST_HEADER include/spdk/nvme_intel.h 00:04:26.218 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:26.218 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:26.218 TEST_HEADER include/spdk/nvme_spec.h 00:04:26.218 TEST_HEADER include/spdk/nvme_zns.h 00:04:26.218 CC test/env/mem_callbacks/mem_callbacks.o 00:04:26.218 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:26.218 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:26.218 TEST_HEADER include/spdk/nvmf.h 00:04:26.218 LINK jsoncat 00:04:26.218 TEST_HEADER include/spdk/nvmf_spec.h 00:04:26.218 TEST_HEADER include/spdk/nvmf_transport.h 00:04:26.218 TEST_HEADER include/spdk/opal.h 00:04:26.218 TEST_HEADER include/spdk/opal_spec.h 00:04:26.218 TEST_HEADER include/spdk/pci_ids.h 00:04:26.218 TEST_HEADER include/spdk/pipe.h 00:04:26.218 TEST_HEADER include/spdk/queue.h 00:04:26.218 TEST_HEADER include/spdk/reduce.h 00:04:26.218 LINK nvme_fuzz 00:04:26.218 TEST_HEADER include/spdk/rpc.h 00:04:26.218 TEST_HEADER include/spdk/scheduler.h 00:04:26.476 TEST_HEADER include/spdk/scsi.h 00:04:26.476 TEST_HEADER include/spdk/scsi_spec.h 00:04:26.476 TEST_HEADER include/spdk/sock.h 00:04:26.476 TEST_HEADER include/spdk/stdinc.h 00:04:26.476 TEST_HEADER include/spdk/string.h 00:04:26.476 TEST_HEADER include/spdk/thread.h 00:04:26.476 TEST_HEADER include/spdk/trace.h 00:04:26.476 TEST_HEADER include/spdk/trace_parser.h 00:04:26.476 TEST_HEADER include/spdk/tree.h 00:04:26.476 TEST_HEADER include/spdk/ublk.h 00:04:26.476 TEST_HEADER include/spdk/util.h 00:04:26.476 
TEST_HEADER include/spdk/uuid.h 00:04:26.476 TEST_HEADER include/spdk/version.h 00:04:26.476 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:26.476 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:26.476 TEST_HEADER include/spdk/vhost.h 00:04:26.476 TEST_HEADER include/spdk/vmd.h 00:04:26.476 TEST_HEADER include/spdk/xor.h 00:04:26.476 CC app/fio/bdev/fio_plugin.o 00:04:26.476 TEST_HEADER include/spdk/zipf.h 00:04:26.476 CXX test/cpp_headers/accel.o 00:04:26.476 CC test/app/stub/stub.o 00:04:26.476 LINK spdk_top 00:04:26.476 LINK mem_callbacks 00:04:26.476 CXX test/cpp_headers/accel_module.o 00:04:26.734 LINK stub 00:04:26.734 CC examples/sock/hello_world/hello_sock.o 00:04:26.734 CXX test/cpp_headers/assert.o 00:04:26.734 CXX test/cpp_headers/barrier.o 00:04:26.734 CC test/env/vtophys/vtophys.o 00:04:26.734 CC examples/vmd/lsvmd/lsvmd.o 00:04:26.734 LINK vhost_fuzz 00:04:26.734 CXX test/cpp_headers/base64.o 00:04:26.734 LINK lsvmd 00:04:26.734 LINK vtophys 00:04:26.992 CC test/event/event_perf/event_perf.o 00:04:26.992 CC test/event/reactor/reactor.o 00:04:26.992 CC test/event/reactor_perf/reactor_perf.o 00:04:26.992 LINK hello_sock 00:04:26.992 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:26.992 CXX test/cpp_headers/bdev.o 00:04:26.992 LINK spdk_bdev 00:04:26.992 LINK event_perf 00:04:26.992 LINK reactor 00:04:26.992 CC test/env/memory/memory_ut.o 00:04:26.992 LINK reactor_perf 00:04:26.992 CC examples/vmd/led/led.o 00:04:27.249 CXX test/cpp_headers/bdev_module.o 00:04:27.249 LINK env_dpdk_post_init 00:04:27.249 CC app/vhost/vhost.o 00:04:27.249 CC test/env/pci/pci_ut.o 00:04:27.249 CC test/event/app_repeat/app_repeat.o 00:04:27.249 LINK led 00:04:27.249 CXX test/cpp_headers/bdev_zone.o 00:04:27.249 CC test/event/scheduler/scheduler.o 00:04:27.506 CC test/nvme/aer/aer.o 00:04:27.506 LINK app_repeat 00:04:27.506 LINK vhost 00:04:27.506 CC examples/idxd/perf/perf.o 00:04:27.506 CXX test/cpp_headers/bit_array.o 00:04:27.506 CC test/nvme/reset/reset.o 00:04:27.506 LINK scheduler 00:04:27.763 CXX test/cpp_headers/bit_pool.o 00:04:27.763 CC test/rpc_client/rpc_client_test.o 00:04:27.763 LINK pci_ut 00:04:27.763 LINK aer 00:04:27.763 CC test/nvme/sgl/sgl.o 00:04:28.022 LINK reset 00:04:28.022 CXX test/cpp_headers/blob_bdev.o 00:04:28.022 LINK idxd_perf 00:04:28.022 CC test/nvme/e2edp/nvme_dp.o 00:04:28.022 LINK rpc_client_test 00:04:28.022 LINK memory_ut 00:04:28.022 CC test/nvme/overhead/overhead.o 00:04:28.022 CXX test/cpp_headers/blobfs_bdev.o 00:04:28.022 LINK sgl 00:04:28.280 CC test/nvme/err_injection/err_injection.o 00:04:28.280 LINK iscsi_fuzz 00:04:28.280 CC test/nvme/reserve/reserve.o 00:04:28.280 CC test/nvme/startup/startup.o 00:04:28.280 CC examples/accel/perf/accel_perf.o 00:04:28.280 LINK nvme_dp 00:04:28.280 CXX test/cpp_headers/blobfs.o 00:04:28.280 LINK err_injection 00:04:28.280 LINK overhead 00:04:28.280 CC test/nvme/simple_copy/simple_copy.o 00:04:28.280 CC test/accel/dif/dif.o 00:04:28.538 LINK startup 00:04:28.538 LINK reserve 00:04:28.538 CXX test/cpp_headers/blob.o 00:04:28.538 CC test/nvme/connect_stress/connect_stress.o 00:04:28.538 CXX test/cpp_headers/conf.o 00:04:28.538 CXX test/cpp_headers/config.o 00:04:28.796 LINK simple_copy 00:04:28.796 CXX test/cpp_headers/cpuset.o 00:04:28.796 CC test/blobfs/mkfs/mkfs.o 00:04:28.796 CC examples/nvme/hello_world/hello_world.o 00:04:28.796 LINK accel_perf 00:04:28.796 LINK connect_stress 00:04:28.796 CC examples/blob/hello_world/hello_blob.o 00:04:28.796 CXX test/cpp_headers/crc16.o 00:04:28.796 CXX 
test/cpp_headers/crc32.o 00:04:29.055 CC test/lvol/esnap/esnap.o 00:04:29.055 LINK mkfs 00:04:29.055 LINK dif 00:04:29.055 CC examples/blob/cli/blobcli.o 00:04:29.055 CXX test/cpp_headers/crc64.o 00:04:29.055 LINK hello_world 00:04:29.055 CC test/nvme/boot_partition/boot_partition.o 00:04:29.055 LINK hello_blob 00:04:29.055 CC test/nvme/compliance/nvme_compliance.o 00:04:29.055 CC examples/nvme/reconnect/reconnect.o 00:04:29.313 CXX test/cpp_headers/dif.o 00:04:29.313 LINK boot_partition 00:04:29.313 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:29.313 CC test/nvme/fused_ordering/fused_ordering.o 00:04:29.313 CC examples/nvme/arbitration/arbitration.o 00:04:29.313 CC examples/nvme/hotplug/hotplug.o 00:04:29.313 CXX test/cpp_headers/dma.o 00:04:29.313 CXX test/cpp_headers/endian.o 00:04:29.572 LINK nvme_compliance 00:04:29.572 LINK fused_ordering 00:04:29.572 LINK reconnect 00:04:29.572 CXX test/cpp_headers/env_dpdk.o 00:04:29.572 LINK blobcli 00:04:29.572 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:29.572 LINK hotplug 00:04:29.572 LINK arbitration 00:04:29.830 CXX test/cpp_headers/env.o 00:04:29.830 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:29.830 LINK doorbell_aers 00:04:29.830 CC test/bdev/bdevio/bdevio.o 00:04:29.830 LINK nvme_manage 00:04:29.830 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:29.830 CC examples/nvme/abort/abort.o 00:04:30.089 CC examples/bdev/hello_world/hello_bdev.o 00:04:30.089 CC test/nvme/fdp/fdp.o 00:04:30.089 CXX test/cpp_headers/event.o 00:04:30.089 CXX test/cpp_headers/fd_group.o 00:04:30.089 LINK cmb_copy 00:04:30.089 CXX test/cpp_headers/fd.o 00:04:30.089 LINK pmr_persistence 00:04:30.089 CXX test/cpp_headers/file.o 00:04:30.347 CXX test/cpp_headers/ftl.o 00:04:30.347 LINK hello_bdev 00:04:30.347 CXX test/cpp_headers/gpt_spec.o 00:04:30.347 CC test/nvme/cuse/cuse.o 00:04:30.347 CC examples/bdev/bdevperf/bdevperf.o 00:04:30.347 LINK bdevio 00:04:30.347 CXX test/cpp_headers/hexlify.o 00:04:30.347 LINK abort 00:04:30.347 LINK fdp 00:04:30.347 CXX test/cpp_headers/histogram_data.o 00:04:30.347 CXX test/cpp_headers/idxd.o 00:04:30.347 CXX test/cpp_headers/idxd_spec.o 00:04:30.606 CXX test/cpp_headers/init.o 00:04:30.606 CXX test/cpp_headers/ioat.o 00:04:30.606 CXX test/cpp_headers/ioat_spec.o 00:04:30.606 CXX test/cpp_headers/iscsi_spec.o 00:04:30.606 CXX test/cpp_headers/json.o 00:04:30.606 CXX test/cpp_headers/jsonrpc.o 00:04:30.606 CXX test/cpp_headers/keyring.o 00:04:30.606 CXX test/cpp_headers/keyring_module.o 00:04:30.606 CXX test/cpp_headers/likely.o 00:04:30.606 CXX test/cpp_headers/log.o 00:04:30.864 CXX test/cpp_headers/lvol.o 00:04:30.864 CXX test/cpp_headers/memory.o 00:04:30.864 CXX test/cpp_headers/mmio.o 00:04:30.864 CXX test/cpp_headers/nbd.o 00:04:30.864 CXX test/cpp_headers/notify.o 00:04:30.864 CXX test/cpp_headers/nvme.o 00:04:30.864 CXX test/cpp_headers/nvme_intel.o 00:04:30.864 CXX test/cpp_headers/nvme_ocssd.o 00:04:30.864 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:30.864 CXX test/cpp_headers/nvme_spec.o 00:04:30.864 CXX test/cpp_headers/nvme_zns.o 00:04:31.123 CXX test/cpp_headers/nvmf_cmd.o 00:04:31.123 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:31.123 CXX test/cpp_headers/nvmf.o 00:04:31.123 CXX test/cpp_headers/nvmf_spec.o 00:04:31.123 CXX test/cpp_headers/nvmf_transport.o 00:04:31.123 CXX test/cpp_headers/opal.o 00:04:31.123 CXX test/cpp_headers/opal_spec.o 00:04:31.123 CXX test/cpp_headers/pci_ids.o 00:04:31.123 LINK bdevperf 00:04:31.381 CXX test/cpp_headers/pipe.o 00:04:31.381 CXX test/cpp_headers/queue.o 
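[Editor's note on the CXX test/cpp_headers/*.o run above, which resumes below: this is SPDK's header self-containment check — each public header is compiled as its own C++ translation unit, so a header that does not pull in its own dependencies, or is not C++-safe, fails the build on its own. A minimal sketch of the idea; file layout and compiler flags are illustrative assumptions, not taken from SPDK's build files:

  # Hypothetical stand-in for the per-header check; the real rules live in
  # the test/cpp_headers build files.
  for hdr in include/spdk/*.h; do
    name=$(basename "$hdr" .h)
    printf '#include <spdk/%s.h>\n' "$name" > "$name.cxx"   # one TU per header
    c++ -I include -c "$name.cxx" -o "$name.o"              # fails if not self-contained
  done
]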
00:04:31.381 CXX test/cpp_headers/reduce.o 00:04:31.381 CXX test/cpp_headers/rpc.o 00:04:31.382 CXX test/cpp_headers/scheduler.o 00:04:31.382 CXX test/cpp_headers/scsi.o 00:04:31.382 CXX test/cpp_headers/scsi_spec.o 00:04:31.382 CXX test/cpp_headers/sock.o 00:04:31.382 CXX test/cpp_headers/stdinc.o 00:04:31.382 CXX test/cpp_headers/string.o 00:04:31.382 CXX test/cpp_headers/thread.o 00:04:31.640 CXX test/cpp_headers/trace.o 00:04:31.640 CXX test/cpp_headers/trace_parser.o 00:04:31.640 CXX test/cpp_headers/tree.o 00:04:31.640 CXX test/cpp_headers/ublk.o 00:04:31.640 CXX test/cpp_headers/util.o 00:04:31.640 CXX test/cpp_headers/uuid.o 00:04:31.640 CXX test/cpp_headers/version.o 00:04:31.640 CC examples/nvmf/nvmf/nvmf.o 00:04:31.640 CXX test/cpp_headers/vfio_user_pci.o 00:04:31.640 CXX test/cpp_headers/vfio_user_spec.o 00:04:31.640 CXX test/cpp_headers/vhost.o 00:04:31.640 CXX test/cpp_headers/vmd.o 00:04:31.640 CXX test/cpp_headers/xor.o 00:04:31.983 CXX test/cpp_headers/zipf.o 00:04:31.983 LINK cuse 00:04:31.983 LINK nvmf 00:04:35.322 LINK esnap 00:04:35.580 00:04:35.580 real 1m1.614s 00:04:35.580 user 5m43.930s 00:04:35.580 sys 1m6.787s 00:04:35.580 18:11:21 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:04:35.580 18:11:21 make -- common/autotest_common.sh@10 -- $ set +x 00:04:35.580 ************************************ 00:04:35.580 END TEST make 00:04:35.580 ************************************ 00:04:35.580 18:11:21 -- common/autotest_common.sh@1142 -- $ return 0 00:04:35.580 18:11:21 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:35.580 18:11:21 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:35.580 18:11:21 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:35.580 18:11:21 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:35.580 18:11:21 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:35.580 18:11:21 -- pm/common@44 -- $ pid=5984 00:04:35.580 18:11:21 -- pm/common@50 -- $ kill -TERM 5984 00:04:35.580 18:11:21 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:35.580 18:11:21 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:35.580 18:11:21 -- pm/common@44 -- $ pid=5985 00:04:35.581 18:11:21 -- pm/common@50 -- $ kill -TERM 5985 00:04:35.581 18:11:21 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:35.581 18:11:21 -- nvmf/common.sh@7 -- # uname -s 00:04:35.581 18:11:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:35.581 18:11:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:35.581 18:11:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:35.581 18:11:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:35.581 18:11:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:35.581 18:11:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:35.581 18:11:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:35.581 18:11:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:35.581 18:11:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:35.581 18:11:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:35.581 18:11:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1fa35760-e429-4362-9ca7-dc58b037cb44 00:04:35.581 18:11:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=1fa35760-e429-4362-9ca7-dc58b037cb44 00:04:35.581 18:11:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:04:35.581 18:11:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:35.581 18:11:21 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:35.581 18:11:21 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:35.581 18:11:21 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:35.581 18:11:21 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:35.581 18:11:21 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:35.581 18:11:21 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:35.581 18:11:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:35.581 18:11:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:35.581 18:11:21 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:35.581 18:11:21 -- paths/export.sh@5 -- # export PATH 00:04:35.581 18:11:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:35.581 18:11:21 -- nvmf/common.sh@47 -- # : 0 00:04:35.581 18:11:21 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:35.581 18:11:21 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:35.581 18:11:21 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:35.581 18:11:21 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:35.581 18:11:21 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:35.581 18:11:21 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:35.581 18:11:21 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:35.581 18:11:21 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:35.581 18:11:21 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:35.581 18:11:21 -- spdk/autotest.sh@32 -- # uname -s 00:04:35.581 18:11:21 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:35.581 18:11:21 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:35.581 18:11:21 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:35.581 18:11:21 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:35.581 18:11:21 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:35.581 18:11:21 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:35.581 18:11:21 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:35.581 18:11:21 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:35.581 18:11:21 -- spdk/autotest.sh@48 -- # udevadm_pid=65827 00:04:35.581 18:11:21 -- spdk/autotest.sh@47 -- # 
/usr/sbin/udevadm monitor --property 00:04:35.581 18:11:21 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:35.581 18:11:21 -- pm/common@17 -- # local monitor 00:04:35.581 18:11:21 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:35.581 18:11:21 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:35.581 18:11:21 -- pm/common@25 -- # sleep 1 00:04:35.581 18:11:21 -- pm/common@21 -- # date +%s 00:04:35.581 18:11:21 -- pm/common@21 -- # date +%s 00:04:35.837 18:11:21 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1720721481 00:04:35.837 18:11:21 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1720721481 00:04:35.837 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1720721481_collect-vmstat.pm.log 00:04:35.837 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1720721481_collect-cpu-load.pm.log 00:04:36.769 18:11:22 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:36.769 18:11:22 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:36.769 18:11:22 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:36.769 18:11:22 -- common/autotest_common.sh@10 -- # set +x 00:04:36.769 18:11:23 -- spdk/autotest.sh@59 -- # create_test_list 00:04:36.769 18:11:23 -- common/autotest_common.sh@746 -- # xtrace_disable 00:04:36.769 18:11:23 -- common/autotest_common.sh@10 -- # set +x 00:04:36.769 18:11:23 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:36.769 18:11:23 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:36.769 18:11:23 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:36.769 18:11:23 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:36.769 18:11:23 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:36.769 18:11:23 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:36.769 18:11:23 -- common/autotest_common.sh@1455 -- # uname 00:04:36.769 18:11:23 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:04:36.769 18:11:23 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:36.769 18:11:23 -- common/autotest_common.sh@1475 -- # uname 00:04:36.769 18:11:23 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:04:36.769 18:11:23 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:04:36.769 18:11:23 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:04:36.769 18:11:23 -- spdk/autotest.sh@72 -- # hash lcov 00:04:36.769 18:11:23 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:04:36.769 18:11:23 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:04:36.769 --rc lcov_branch_coverage=1 00:04:36.769 --rc lcov_function_coverage=1 00:04:36.769 --rc genhtml_branch_coverage=1 00:04:36.769 --rc genhtml_function_coverage=1 00:04:36.769 --rc genhtml_legend=1 00:04:36.769 --rc geninfo_all_blocks=1 00:04:36.769 ' 00:04:36.769 18:11:23 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:04:36.769 --rc lcov_branch_coverage=1 00:04:36.769 --rc lcov_function_coverage=1 00:04:36.769 --rc genhtml_branch_coverage=1 00:04:36.769 --rc genhtml_function_coverage=1 00:04:36.769 --rc genhtml_legend=1 00:04:36.769 --rc geninfo_all_blocks=1 00:04:36.769 ' 00:04:36.769 18:11:23 -- spdk/autotest.sh@81 -- # export 
'LCOV=lcov 00:04:36.769 --rc lcov_branch_coverage=1 00:04:36.769 --rc lcov_function_coverage=1 00:04:36.769 --rc genhtml_branch_coverage=1 00:04:36.769 --rc genhtml_function_coverage=1 00:04:36.769 --rc genhtml_legend=1 00:04:36.769 --rc geninfo_all_blocks=1 00:04:36.769 --no-external' 18:11:23 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:04:36.769 --rc lcov_branch_coverage=1 00:04:36.769 --rc lcov_function_coverage=1 00:04:36.769 --rc genhtml_branch_coverage=1 00:04:36.769 --rc genhtml_function_coverage=1 00:04:36.769 --rc genhtml_legend=1 00:04:36.769 --rc geninfo_all_blocks=1 00:04:36.769 --no-external' 18:11:23 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:04:36.769 lcov: LCOV version 1.14 00:04:36.769 18:11:23 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:51.642 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:51.642 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno
00:05:01.615 [geninfo then emitted the same two-line warning — "<header>.gcno:no functions found" / "geninfo: WARNING: GCOV did not produce any data for <header>.gcno" — once for every header object under /home/vagrant/spdk_repo/spdk/test/cpp_headers/ (accel through zipf); the repeated entries are elided]
00:05:04.922 18:11:50 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 18:11:50 -- common/autotest_common.sh@722 -- # xtrace_disable 18:11:50 -- common/autotest_common.sh@10 -- # set +x 00:05:04.922 18:11:50 -- spdk/autotest.sh@91 -- # rm -f 18:11:50 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:04.922 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:05.488 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:05.488 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:05.488 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:05:05.488 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:05:05.488 18:11:51 -- spdk/autotest.sh@96 -- # get_zoned_devs 18:11:51 -- common/autotest_common.sh@1669 -- # zoned_devs=()
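[Editor's note on the geninfo warnings elided above: the lcov call at autotest.sh@85 takes an initial capture (-c -i) tagged "Baseline" before any test has run, so object files for header-only translation units legitimately contain no executed functions and the warnings are expected noise, not failures. The capture is the first half of the usual baseline-plus-results flow; the post-test steps below are assumptions about what autotest does after this excerpt, not something shown in it:

  # Zeroed baseline before the tests (this is the call visible above):
  lcov $LCOV_OPTS -q -c -i -t Baseline -d "$src" -o "$out/cov_base.info"
  # Assumed later steps: capture real counters after the run, then merge with
  # the baseline so never-executed files still appear at 0% coverage.
  lcov $LCOV_OPTS -q -c -t Tests -d "$src" -o "$out/cov_test.info"
  lcov $LCOV_OPTS -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"

The pre_cleanup trace continues below with get_zoned_devs.]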
00:05:05.488 18:11:51 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:05:05.489 18:11:51 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:05:05.489 18:11:51 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:05.489 18:11:51 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:05:05.489 18:11:51 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:05:05.489 18:11:51 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:05.489 18:11:51 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:05.489 18:11:51 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:05.489 18:11:51 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:05:05.489 18:11:51 -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:05:05.489 18:11:51 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:05.489 18:11:51 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:05.489 18:11:51 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:05.489 18:11:51 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1 00:05:05.489 18:11:51 -- common/autotest_common.sh@1662 -- # local device=nvme2n1 00:05:05.489 18:11:51 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:05:05.489 18:11:51 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:05.489 18:11:51 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:05.489 18:11:51 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n2 00:05:05.489 18:11:51 -- common/autotest_common.sh@1662 -- # local device=nvme2n2 00:05:05.489 18:11:51 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:05:05.489 18:11:51 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:05.489 18:11:51 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:05.489 18:11:51 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n3 00:05:05.489 18:11:51 -- common/autotest_common.sh@1662 -- # local device=nvme2n3 00:05:05.489 18:11:51 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:05:05.489 18:11:51 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:05.489 18:11:51 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:05.489 18:11:51 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3c3n1 00:05:05.489 18:11:51 -- common/autotest_common.sh@1662 -- # local device=nvme3c3n1 00:05:05.489 18:11:51 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:05:05.489 18:11:51 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:05.489 18:11:51 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:05.489 18:11:51 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3n1 00:05:05.489 18:11:51 -- common/autotest_common.sh@1662 -- # local device=nvme3n1 00:05:05.489 18:11:51 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:05:05.489 18:11:51 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:05.489 18:11:51 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:05:05.489 18:11:51 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:05.489 18:11:51 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:05.489 18:11:51 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:05:05.489 18:11:51 -- scripts/common.sh@378 -- 
# local block=/dev/nvme0n1 pt 00:05:05.489 18:11:51 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:05.489 No valid GPT data, bailing 00:05:05.489 18:11:51 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:05.489 18:11:51 -- scripts/common.sh@391 -- # pt= 00:05:05.489 18:11:51 -- scripts/common.sh@392 -- # return 1 00:05:05.489 18:11:51 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:05.489 1+0 records in 00:05:05.489 1+0 records out 00:05:05.489 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0120398 s, 87.1 MB/s 00:05:05.489 18:11:51 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:05.489 18:11:51 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:05.489 18:11:51 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:05:05.489 18:11:51 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:05:05.489 18:11:51 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:05.489 No valid GPT data, bailing 00:05:05.489 18:11:51 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:05.489 18:11:51 -- scripts/common.sh@391 -- # pt= 00:05:05.489 18:11:51 -- scripts/common.sh@392 -- # return 1 00:05:05.489 18:11:51 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:05.489 1+0 records in 00:05:05.489 1+0 records out 00:05:05.489 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00380675 s, 275 MB/s 00:05:05.489 18:11:51 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:05.489 18:11:51 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:05.489 18:11:51 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme2n1 00:05:05.489 18:11:51 -- scripts/common.sh@378 -- # local block=/dev/nvme2n1 pt 00:05:05.489 18:11:51 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:05:05.747 No valid GPT data, bailing 00:05:05.747 18:11:51 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:05:05.747 18:11:51 -- scripts/common.sh@391 -- # pt= 00:05:05.747 18:11:51 -- scripts/common.sh@392 -- # return 1 00:05:05.747 18:11:51 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:05:05.747 1+0 records in 00:05:05.747 1+0 records out 00:05:05.747 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00452172 s, 232 MB/s 00:05:05.747 18:11:51 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:05.747 18:11:51 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:05.747 18:11:51 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme2n2 00:05:05.747 18:11:51 -- scripts/common.sh@378 -- # local block=/dev/nvme2n2 pt 00:05:05.747 18:11:51 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:05:05.747 No valid GPT data, bailing 00:05:05.747 18:11:52 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:05:05.747 18:11:52 -- scripts/common.sh@391 -- # pt= 00:05:05.747 18:11:52 -- scripts/common.sh@392 -- # return 1 00:05:05.747 18:11:52 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:05:05.747 1+0 records in 00:05:05.747 1+0 records out 00:05:05.747 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00433059 s, 242 MB/s 00:05:05.747 18:11:52 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:05.747 18:11:52 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:05.747 18:11:52 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme2n3 00:05:05.747 18:11:52 -- 
scripts/common.sh@378 -- # local block=/dev/nvme2n3 pt 00:05:05.747 18:11:52 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:05:05.747 No valid GPT data, bailing 00:05:05.747 18:11:52 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:05:05.747 18:11:52 -- scripts/common.sh@391 -- # pt= 00:05:05.747 18:11:52 -- scripts/common.sh@392 -- # return 1 00:05:05.747 18:11:52 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:05:05.747 1+0 records in 00:05:05.747 1+0 records out 00:05:05.747 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00452222 s, 232 MB/s 00:05:05.747 18:11:52 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:05.747 18:11:52 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:05.747 18:11:52 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme3n1 00:05:05.747 18:11:52 -- scripts/common.sh@378 -- # local block=/dev/nvme3n1 pt 00:05:05.747 18:11:52 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:05:06.006 No valid GPT data, bailing 00:05:06.006 18:11:52 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:05:06.006 18:11:52 -- scripts/common.sh@391 -- # pt= 00:05:06.006 18:11:52 -- scripts/common.sh@392 -- # return 1 00:05:06.006 18:11:52 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:05:06.006 1+0 records in 00:05:06.006 1+0 records out 00:05:06.006 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00435077 s, 241 MB/s 00:05:06.006 18:11:52 -- spdk/autotest.sh@118 -- # sync 00:05:06.006 18:11:52 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:06.006 18:11:52 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:06.006 18:11:52 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:07.907 18:11:54 -- spdk/autotest.sh@124 -- # uname -s 00:05:07.907 18:11:54 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:05:07.907 18:11:54 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:05:07.907 18:11:54 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:07.907 18:11:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:07.907 18:11:54 -- common/autotest_common.sh@10 -- # set +x 00:05:07.907 ************************************ 00:05:07.907 START TEST setup.sh 00:05:07.907 ************************************ 00:05:07.907 18:11:54 setup.sh -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:05:07.907 * Looking for test storage... 00:05:07.907 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:07.907 18:11:54 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:05:07.907 18:11:54 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:05:07.907 18:11:54 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:05:07.907 18:11:54 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:07.907 18:11:54 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:07.907 18:11:54 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:07.907 ************************************ 00:05:07.907 START TEST acl 00:05:07.907 ************************************ 00:05:07.907 18:11:54 setup.sh.acl -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:05:07.907 * Looking for test storage... 
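[Editor's note: the pre-cleanup trace above reduces to a small amount of shell. get_zoned_devs collects any zoned namespaces (none here — every /sys/block/nvme*/queue/zoned reads "none"), then each remaining namespace is probed with spdk-gpt.py and blkid, and its first MiB is zeroed with dd only when both find no partition table. A condensed reconstruction; helper names follow the xtrace, but the exact logic lives in autotest.sh and scripts/common.sh:

  shopt -s extglob
  declare -A zoned_devs
  for nvme in /sys/block/nvme*; do
    dev=${nvme##*/}
    [[ -e $nvme/queue/zoned && $(<"$nvme/queue/zoned") != none ]] && zoned_devs[$dev]=1
  done
  for dev in /dev/nvme*n!(*p*); do                      # namespaces, skipping partitions
    [[ -n ${zoned_devs[${dev##*/}]:-} ]] && continue    # never wipe zoned namespaces
    if ! scripts/spdk-gpt.py "$dev" && [[ -z $(blkid -s PTTYPE -o value "$dev") ]]; then
      dd if=/dev/zero of="$dev" bs=1M count=1           # the "No valid GPT data, bailing" path
    fi
  done
]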
00:05:07.907 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:07.907 18:11:54 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:05:07.907 18:11:54 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:05:07.907 18:11:54 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:05:07.907 18:11:54 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:05:07.907 18:11:54 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:07.907 18:11:54 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:05:07.907 18:11:54 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:05:07.907 18:11:54 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:07.907 18:11:54 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:07.907 18:11:54 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:07.907 18:11:54 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:05:07.907 18:11:54 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:05:07.907 18:11:54 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:07.907 18:11:54 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:07.907 18:11:54 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:07.907 18:11:54 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1 00:05:07.907 18:11:54 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme2n1 00:05:07.907 18:11:54 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:05:07.907 18:11:54 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:07.907 18:11:54 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:07.907 18:11:54 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n2 00:05:07.907 18:11:54 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme2n2 00:05:07.907 18:11:54 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:05:07.907 18:11:54 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:07.907 18:11:54 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:07.907 18:11:54 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n3 00:05:07.907 18:11:54 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme2n3 00:05:07.907 18:11:54 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:05:07.907 18:11:54 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:07.907 18:11:54 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:07.907 18:11:54 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3c3n1 00:05:07.907 18:11:54 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme3c3n1 00:05:07.907 18:11:54 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:05:07.907 18:11:54 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:07.907 18:11:54 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 
00:05:07.907 18:11:54 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3n1 00:05:07.907 18:11:54 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme3n1 00:05:07.907 18:11:54 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:05:07.907 18:11:54 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:07.907 18:11:54 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:05:07.907 18:11:54 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:05:07.907 18:11:54 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:05:07.907 18:11:54 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:05:07.907 18:11:54 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:05:07.907 18:11:54 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:07.907 18:11:54 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:09.285 18:11:55 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:05:09.285 18:11:55 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:05:09.285 18:11:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:09.285 18:11:55 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:05:09.285 18:11:55 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:05:09.285 18:11:55 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:09.544 18:11:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:05:09.544 18:11:55 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:09.544 18:11:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:10.111 Hugepages 00:05:10.111 node hugesize free / total 00:05:10.111 18:11:56 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:05:10.111 18:11:56 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:10.111 18:11:56 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:10.111 00:05:10.111 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:10.111 18:11:56 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:05:10.111 18:11:56 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:10.111 18:11:56 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:10.111 18:11:56 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:05:10.111 18:11:56 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:05:10.111 18:11:56 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:10.111 18:11:56 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:10.371 18:11:56 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:05:10.371 18:11:56 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:10.371 18:11:56 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:05:10.371 18:11:56 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:10.371 18:11:56 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:10.371 18:11:56 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:10.371 18:11:56 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:05:10.371 18:11:56 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:10.371 18:11:56 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:05:10.371 18:11:56 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:10.371 18:11:56 setup.sh.acl -- 
setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:10.371 18:11:56 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:10.371 18:11:56 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:12.0 == *:*:*.* ]] 00:05:10.371 18:11:56 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:10.371 18:11:56 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:05:10.371 18:11:56 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:10.371 18:11:56 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:10.371 18:11:56 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:10.371 18:11:56 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:13.0 == *:*:*.* ]] 00:05:10.371 18:11:56 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:10.371 18:11:56 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\3\.\0* ]] 00:05:10.371 18:11:56 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:10.371 18:11:56 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:10.371 18:11:56 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:10.371 18:11:56 setup.sh.acl -- setup/acl.sh@24 -- # (( 4 > 0 )) 00:05:10.371 18:11:56 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:05:10.371 18:11:56 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:10.371 18:11:56 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:10.371 18:11:56 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:10.371 ************************************ 00:05:10.371 START TEST denied 00:05:10.371 ************************************ 00:05:10.371 18:11:56 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:05:10.371 18:11:56 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:05:10.371 18:11:56 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:05:10.371 18:11:56 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:05:10.371 18:11:56 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:05:10.371 18:11:56 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:11.749 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:05:11.749 18:11:57 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:05:11.749 18:11:57 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:05:11.749 18:11:57 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:05:11.749 18:11:57 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:05:11.749 18:11:57 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:05:11.749 18:11:57 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:11.749 18:11:57 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:11.749 18:11:57 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:05:11.749 18:11:57 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:11.749 18:11:57 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:18.312 00:05:18.312 real 0m7.134s 00:05:18.312 user 0m0.842s 00:05:18.312 sys 0m1.322s 00:05:18.313 18:12:03 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:18.313 18:12:03 setup.sh.acl.denied -- 
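The collect_setup_devs loop that just ran is a plain while-read over `setup.sh status` output: field 2 is the BDF, field 6 the bound driver, and any row whose second field is not a PCI address (hugepage lines, headers) is skipped by the *:*:*.* glob. A sketch of that filter, assuming the same column layout and an assumed $spdk_dir path variable:

  # Pick NVMe controllers out of `setup.sh status`, mirroring the
  # [[ $dev == *:*:*.* ]] / [[ $driver == nvme ]] tests in acl.sh@19-22.
  devs=()
  declare -A drivers
  while read -r _ dev _ _ _ driver _; do
      [[ $dev == *:*:*.* ]] || continue    # not a BDF row (e.g. hugepage lines)
      [[ $driver == nvme ]] || continue    # only controllers bound to nvme
      devs+=("$dev")
      drivers["$dev"]=$driver
  done < <("$spdk_dir/scripts/setup.sh" status)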
common/autotest_common.sh@10 -- # set +x 00:05:18.313 ************************************ 00:05:18.313 END TEST denied 00:05:18.313 ************************************ 00:05:18.313 18:12:03 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:05:18.313 18:12:03 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:05:18.313 18:12:03 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:18.313 18:12:03 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:18.313 18:12:03 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:18.313 ************************************ 00:05:18.313 START TEST allowed 00:05:18.313 ************************************ 00:05:18.313 18:12:03 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:05:18.313 18:12:03 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:05:18.313 18:12:03 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:05:18.313 18:12:03 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:05:18.313 18:12:03 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:05:18.313 18:12:03 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:18.572 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:18.572 18:12:04 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:18.572 18:12:04 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:05:18.572 18:12:04 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:05:18.572 18:12:04 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]] 00:05:18.572 18:12:04 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:05:18.831 18:12:04 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:18.831 18:12:04 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:18.831 18:12:04 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:05:18.831 18:12:04 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:12.0 ]] 00:05:18.831 18:12:04 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:12.0/driver 00:05:18.831 18:12:04 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:18.831 18:12:04 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:18.831 18:12:04 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:05:18.832 18:12:04 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:13.0 ]] 00:05:18.832 18:12:04 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:13.0/driver 00:05:18.832 18:12:04 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:18.832 18:12:04 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:18.832 18:12:04 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:05:18.832 18:12:04 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:18.832 18:12:04 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:19.768 00:05:19.768 real 0m2.109s 00:05:19.768 user 0m0.949s 00:05:19.768 sys 0m1.158s 00:05:19.768 18:12:06 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # 
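Both the denied and allowed tests drive the same verify helper: once setup.sh config has honoured PCI_BLOCKED / PCI_ALLOWED, the bound driver is read straight from the sysfs driver symlink. A sketch of that check, with the expected driver name made a parameter instead of acl.sh's hard-coded nvme match (verify_driver is an assumed name):

  # Confirm which kernel driver a PCI function is bound to, as
  # acl.sh@31-33 does via readlink on the sysfs driver symlink.
  verify_driver() {
      local bdf=$1 want=$2 link
      [[ -e /sys/bus/pci/devices/$bdf ]] || return 1
      link=$(readlink -f "/sys/bus/pci/devices/$bdf/driver") || return 1
      [[ ${link##*/} == "$want" ]]
  }
  verify_driver 0000:00:10.0 uio_pci_generic && echo bound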
xtrace_disable 00:05:19.768 ************************************ 00:05:19.768 18:12:06 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:05:19.768 END TEST allowed 00:05:19.768 ************************************ 00:05:19.768 18:12:06 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:05:19.768 ************************************ 00:05:19.768 END TEST acl 00:05:19.768 ************************************ 00:05:19.768 00:05:19.768 real 0m11.885s 00:05:19.768 user 0m3.051s 00:05:19.768 sys 0m3.867s 00:05:19.768 18:12:06 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:19.768 18:12:06 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:19.768 18:12:06 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:05:19.768 18:12:06 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:19.768 18:12:06 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:19.768 18:12:06 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:19.768 18:12:06 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:19.768 ************************************ 00:05:19.768 START TEST hugepages 00:05:19.768 ************************************ 00:05:19.768 18:12:06 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:20.027 * Looking for test storage... 00:05:20.027 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:20.027 18:12:06 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:05:20.027 18:12:06 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:05:20.027 18:12:06 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:05:20.027 18:12:06 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:05:20.027 18:12:06 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:05:20.027 18:12:06 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:05:20.027 18:12:06 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:05:20.027 18:12:06 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:05:20.027 18:12:06 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:05:20.027 18:12:06 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:05:20.027 18:12:06 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:20.027 18:12:06 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:20.027 18:12:06 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:20.027 18:12:06 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:05:20.027 18:12:06 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:20.027 18:12:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:20.027 18:12:06 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 4715232 kB' 'MemAvailable: 7404956 kB' 'Buffers: 2436 kB' 'Cached: 2893740 kB' 'SwapCached: 0 kB' 'Active: 444472 kB' 'Inactive: 2553620 kB' 'Active(anon): 112432 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2553620 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 103536 kB' 'Mapped: 48620 kB' 
'Shmem: 10516 kB' 'KReclaimable: 82008 kB' 'Slab: 160880 kB' 'SReclaimable: 82008 kB' 'SUnreclaim: 78872 kB' 'KernelStack: 6492 kB' 'PageTables: 3984 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412440 kB' 'Committed_AS: 326692 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54836 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:05:20.027 18:12:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:20.027 18:12:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:20.027 18:12:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:20.027 18:12:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:20.027 18:12:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:20.027 18:12:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:20.027 18:12:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:20.027 18:12:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:20.027 18:12:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:20.027 18:12:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:20.027 18:12:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:20.027 18:12:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:20.027 18:12:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:20.027 18:12:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:20.027 18:12:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:20.027 18:12:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:20.027 18:12:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:20.027 18:12:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:20.027 18:12:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:20.027 18:12:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:20.027 18:12:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:20.027 18:12:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:20.027 18:12:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:20.027 18:12:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:20.027 18:12:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:20.027 18:12:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:20.027 18:12:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:20.027 18:12:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:20.027 18:12:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:20.027 18:12:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:20.027 18:12:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:20.027 18:12:06 setup.sh.hugepages -- 
setup/common.sh@31 -- # IFS=': ' 00:05:20.027 18:12:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
[... the same '[[ <field> == \H\u\g\e\p\a\g\e\s\i\z\e ]]' / '# continue' xtrace pair repeats for every remaining non-matching /proc/meminfo field, Active(anon) through Unaccepted ...]
00:05:20.028 18:12:06
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:20.028 18:12:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:20.028 18:12:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:20.028 18:12:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:20.028 18:12:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:20.028 18:12:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:20.028 18:12:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:20.028 18:12:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:20.028 18:12:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:20.028 18:12:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:20.028 18:12:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:20.028 18:12:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:20.028 18:12:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:20.028 18:12:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:20.028 18:12:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:20.028 18:12:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:20.028 18:12:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:20.028 18:12:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:20.028 18:12:06 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:05:20.028 18:12:06 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:05:20.028 18:12:06 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:05:20.028 18:12:06 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:05:20.028 18:12:06 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:05:20.028 18:12:06 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:05:20.028 18:12:06 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:05:20.028 18:12:06 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:05:20.028 18:12:06 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:05:20.028 18:12:06 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:05:20.028 18:12:06 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:05:20.028 18:12:06 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:20.028 18:12:06 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:05:20.028 18:12:06 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:20.028 18:12:06 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:20.028 18:12:06 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:05:20.028 18:12:06 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:05:20.029 18:12:06 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:20.029 18:12:06 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:20.029 18:12:06 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:20.029 18:12:06 
setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:20.029 18:12:06 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:20.029 18:12:06 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:20.029 18:12:06 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:20.029 18:12:06 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:05:20.029 18:12:06 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:20.029 18:12:06 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:20.029 18:12:06 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:20.029 ************************************ 00:05:20.029 START TEST default_setup 00:05:20.029 ************************************ 00:05:20.029 18:12:06 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:05:20.029 18:12:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:05:20.029 18:12:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:05:20.029 18:12:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:20.029 18:12:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:05:20.029 18:12:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:20.029 18:12:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:05:20.029 18:12:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:20.029 18:12:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:20.029 18:12:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:20.029 18:12:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:20.029 18:12:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:05:20.029 18:12:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:20.029 18:12:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:20.029 18:12:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:20.029 18:12:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:20.029 18:12:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:20.029 18:12:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:20.029 18:12:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:20.029 18:12:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:05:20.029 18:12:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:05:20.029 18:12:06 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:05:20.029 18:12:06 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:20.593 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:21.161 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:21.161 0000:00:13.0 (1b36 
0010): nvme -> uio_pci_generic 00:05:21.161 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:21.161 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:21.161 18:12:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:05:21.161 18:12:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:05:21.161 18:12:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:05:21.161 18:12:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:05:21.161 18:12:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:05:21.161 18:12:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:05:21.161 18:12:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:05:21.161 18:12:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:21.161 18:12:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:21.161 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:21.161 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:21.161 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:21.161 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:21.161 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:21.161 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:21.161 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:21.161 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:21.161 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:21.161 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.161 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.161 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6825272 kB' 'MemAvailable: 9514804 kB' 'Buffers: 2436 kB' 'Cached: 2893720 kB' 'SwapCached: 0 kB' 'Active: 461892 kB' 'Inactive: 2553624 kB' 'Active(anon): 129852 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2553624 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'AnonPages: 121028 kB' 'Mapped: 48788 kB' 'Shmem: 10476 kB' 'KReclaimable: 81612 kB' 'Slab: 160248 kB' 'SReclaimable: 81612 kB' 'SUnreclaim: 78636 kB' 'KernelStack: 6560 kB' 'PageTables: 4180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 346204 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 
9437184 kB' 00:05:21.161 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.161 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.161 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.161 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.161 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.161 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.161 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.161 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.161 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.161 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.161 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.161 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.161 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.161 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.161 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.161 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.161 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.161 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.161 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.161 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.161 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.161 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.161 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.161 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.161 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.161 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.161 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.161 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.161 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.161 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.161 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.161 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.161 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.161 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.161 18:12:07 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.161 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
[... the same '[[ <field> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]' / '# continue' xtrace pair repeats for every non-matching /proc/meminfo field, Inactive(anon) through Committed_AS ...]
00:05:21.162 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- #
continue 00:05:21.162 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.162 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.162 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.162 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.162 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.162 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.162 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.162 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.162 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.162 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.162 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.162 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.162 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.162 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.162 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.162 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.162 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.162 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.162 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.162 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.162 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.162 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.162 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.162 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:21.162 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:21.162 18:12:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:05:21.163 18:12:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:21.163 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:21.163 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:21.163 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:21.163 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:21.163 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:21.163 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:21.163 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:21.163 18:12:07 setup.sh.hugepages.default_setup 
-- setup/common.sh@28 -- # mapfile -t mem 00:05:21.163 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:21.163 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.163 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.163 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6824768 kB' 'MemAvailable: 9514320 kB' 'Buffers: 2436 kB' 'Cached: 2893724 kB' 'SwapCached: 0 kB' 'Active: 461812 kB' 'Inactive: 2553644 kB' 'Active(anon): 129772 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2553644 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'AnonPages: 120900 kB' 'Mapped: 48700 kB' 'Shmem: 10476 kB' 'KReclaimable: 81612 kB' 'Slab: 160208 kB' 'SReclaimable: 81612 kB' 'SUnreclaim: 78596 kB' 'KernelStack: 6480 kB' 'PageTables: 3924 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 346204 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:05:21.163 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.163 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.163 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.163 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.163 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.163 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.163 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.163 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.163 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.163 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.163 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.163 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.163 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.163 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.163 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.163 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.163 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.163 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue 00:05:21.163 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.163 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.163 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.163 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.163 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.163 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.163 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.163 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.163 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.163 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.163 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.163 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.163 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.163 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.163 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.163 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.163 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.163 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.163 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.163 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.163 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.163 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.163 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.163 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.163 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.163 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.163 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.163 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.163 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.163 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.163 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.163 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.163 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.163 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.163 18:12:07 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.163 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.163 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.163 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.163 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.163 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.163 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.163 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.163 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.163 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.163 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.163 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.163 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.163 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.163 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.163 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.163 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.163 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.163 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.163 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.163 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.163 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.163 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.163 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.164 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.164 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.164 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.164 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.164 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.164 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.164 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.164 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.164 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.164 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.164 18:12:07 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:05:21.164 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.164 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.164 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.164 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.164 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.164 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.164 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.164 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.164 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.164 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.164 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.164 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.164 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.164 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.164 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.164 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.164 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.164 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.164 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.164 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.164 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.164 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.164 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.164 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.164 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.164 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.164 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.164 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.164 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.164 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.164 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.164 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.164 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.164 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.164 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.164 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.164 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.164 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.164 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.164 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.164 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.164 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.164 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.164 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.164 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.164 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.164 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.164 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.164 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.164 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.164 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.164 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.164 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.164 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.164 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.164 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.164 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.164 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.164 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.164 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.164 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.164 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.164 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.164 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.164 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.164 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.164 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.164 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.164 18:12:07 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.164 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.164 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.164 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.164 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.164 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.164 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.164 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.164 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.164 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.164 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.164 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.164 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.164 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.164 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.165 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.165 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.165 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.165 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.165 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.165 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.165 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.165 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.165 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.165 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.165 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.165 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.165 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.165 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.165 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.165 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.165 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.165 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.165 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.165 18:12:07 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.165 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.165 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.165 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.165 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.165 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.165 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.165 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.165 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.165 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.165 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.165 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.165 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.165 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.165 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.165 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.165 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:21.165 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:21.165 18:12:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:05:21.165 18:12:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:21.165 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:21.165 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:21.165 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:21.165 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:21.165 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:21.165 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:21.165 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:21.165 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:21.165 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:21.165 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.165 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6824516 kB' 'MemAvailable: 9514068 kB' 'Buffers: 2436 kB' 'Cached: 2893724 kB' 'SwapCached: 0 kB' 'Active: 461880 kB' 'Inactive: 2553644 kB' 'Active(anon): 129840 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2553644 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 
268 kB' 'Writeback: 0 kB' 'AnonPages: 121012 kB' 'Mapped: 48572 kB' 'Shmem: 10476 kB' 'KReclaimable: 81612 kB' 'Slab: 160208 kB' 'SReclaimable: 81612 kB' 'SUnreclaim: 78596 kB' 'KernelStack: 6512 kB' 'PageTables: 4012 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 346204 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:05:21.165 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.165 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.165 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.165 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.165 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.165 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.165 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.165 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.165 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.165 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.165 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.165 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.165 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.165 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.165 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.165 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.165 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.165 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.165 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.165 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.165 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.165 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.165 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.165 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.165 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.166 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
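[The trace above repeats one helper pattern over and over: setup/common.sh@16-33 prints the /proc/meminfo snapshot, then scans it key by key until the requested field matches, echoes its value, and returns. A minimal Bash sketch of that pattern, reconstructed from the trace alone — the line numbers and names follow the trace, but the actual setup/common.sh source may differ in detail:]

  #!/usr/bin/env bash
  # Reconstruction of the get_meminfo pattern visible in the xtrace.
  shopt -s extglob    # needed for the +([0-9]) pattern below

  get_meminfo() {
      local get=$1        # field to look up, e.g. HugePages_Surp
      local node=${2:-}   # optional NUMA node number
      local var val _
      local mem_f=/proc/meminfo mem
      # Per-node stats live under /sys; fall back to the global file
      # when no node is given (the trace shows node= empty here).
      if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      mapfile -t mem < "$mem_f"
      # Per-node files prefix every line with "Node N "; strip it.
      mem=("${mem[@]#Node +([0-9]) }")
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue
          echo "$val" && return 0
      done < <(printf '%s\n' "${mem[@]}")
      return 1
  }

  get_meminfo HugePages_Surp   # prints 0 on the box traced above

[The per-key continue lines in the log are simply this loop's body traced once per /proc/meminfo field until the match is found.]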
00:05:21.166 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.166 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.166 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.166 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.166 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.166 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.166 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.166 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.166 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.166 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.166 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.166 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.166 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.166 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.166 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.166 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.166 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.166 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.166 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.166 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.166 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.166 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.166 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.166 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.166 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.166 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.166 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.166 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.166 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.166 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.166 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.166 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.166 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.166 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.166 18:12:07 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:05:21.166 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.166 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.166 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.166 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.166 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.166 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.166 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.166 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.166 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.166 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.166 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.166 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.166 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.166 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.166 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.166 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.166 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.166 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.166 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.166 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.166 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.166 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.166 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.166 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.166 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.166 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.166 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.167 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.167 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.167 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.167 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.167 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.167 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.167 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
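[The \H\u\g\e\P\a\g\e\s\_\R\s\v\d rendering in these entries is not corruption: when xtrace prints a quoted right-hand side inside [[ ... == ... ]], bash escapes every character so the word reads as a literal (non-glob) pattern. A two-line demo reproduces the effect:]

  get=HugePages_Rsvd
  set -x
  [[ MemTotal == "$get" ]] || true   # traces as: [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
  set +x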
00:05:21.167 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.167 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.167 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.167 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.167 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.167 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.167 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.167 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.167 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.167 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.167 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.167 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.167 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.167 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.167 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.167 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.167 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.167 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.167 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.167 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.167 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.167 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.167 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.167 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.167 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.167 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.167 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.167 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.167 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.167 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.167 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.167 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.427 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.427 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.427 18:12:07 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.427 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.427 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.427 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.427 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.427 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.427 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.427 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.427 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.427 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.427 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.427 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.427 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.427 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.427 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.427 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.427 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.427 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.427 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.427 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.427 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.427 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.427 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.427 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.427 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.427 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.427 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.427 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.427 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.427 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.427 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.427 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.427 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.427 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.427 18:12:07 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:05:21.427 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.427 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.427 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.427 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.427 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.428 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.428 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.428 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.428 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.428 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.428 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.428 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.428 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.428 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.428 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.428 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.428 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.428 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.428 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.428 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.428 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.428 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.428 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.428 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.428 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.428 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.428 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.428 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.428 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.428 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.428 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.428 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.428 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.428 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.428 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.428 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.428 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.428 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.428 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:21.428 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:21.428 18:12:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:05:21.428 nr_hugepages=1024 00:05:21.428 18:12:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:21.428 resv_hugepages=0 00:05:21.428 18:12:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:21.428 surplus_hugepages=0 00:05:21.428 18:12:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:21.428 anon_hugepages=0 00:05:21.428 18:12:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:21.428 18:12:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:21.428 18:12:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:21.428 18:12:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:21.428 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:21.428 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:21.428 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:21.428 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:21.428 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:21.428 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:21.428 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:21.428 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:21.428 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:21.428 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.428 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.428 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6824516 kB' 'MemAvailable: 9514068 kB' 'Buffers: 2436 kB' 'Cached: 2893724 kB' 'SwapCached: 0 kB' 'Active: 461624 kB' 'Inactive: 2553644 kB' 'Active(anon): 129584 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2553644 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'AnonPages: 120736 kB' 'Mapped: 48572 kB' 'Shmem: 10476 kB' 'KReclaimable: 81612 kB' 'Slab: 160208 kB' 'SReclaimable: 81612 kB' 'SUnreclaim: 78596 kB' 'KernelStack: 6496 kB' 'PageTables: 3960 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 
0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 346204 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:05:21.428 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.428 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.428 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.428 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.428 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.428 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.428 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.428 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.428 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.428 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.428 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.428 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.428 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.428 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.428 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.428 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.428 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.428 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.428 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.428 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.428 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.428 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.428 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.428 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.428 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.428 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.428 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.428 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.428 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.428 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.428 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.428 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.428 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.428 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.428 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.428 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.428 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.428 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.428 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.428 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.428 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.428 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.428 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.428 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.428 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.428 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.428 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.428 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.428 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.428 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.428 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.428 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.428 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.428 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.428 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.428 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.428 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.428 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.428 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.429 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.429 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.429 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.429 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
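[With the three lookups done (surp=0, resv=0, and anon=0 earlier), hugepages.sh@102-109 above printed the summary and sanity-checked the requested pool against the kernel's counters before re-reading HugePages_Total. The check reduces to plain shell arithmetic, roughly as follows — the values are the ones from this run, and the exact hugepages.sh wording may differ:]

  nr_hugepages=1024   # requested pool size
  surp=0              # get_meminfo HugePages_Surp
  resv=0              # get_meminfo HugePages_Rsvd
  anon=0              # get_meminfo AnonHugePages (kB)
  echo "nr_hugepages=$nr_hugepages"
  echo "resv_hugepages=$resv"
  echo "surplus_hugepages=$surp"
  echo "anon_hugepages=$anon"
  # HugePages_Total from /proc/meminfo (1024 here) must account for
  # the requested pool plus any surplus and reserved pages.
  (( 1024 == nr_hugepages + surp + resv ))
  (( 1024 == nr_hugepages ))

[Both tests pass here, so the scan of HugePages_Total that follows is the final confirmation that the default setup left exactly the requested 1024 pages in the pool.]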
00:05:21.429 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.429 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.429 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.429 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.429 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.429 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.429 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.429 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.429 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.429 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.429 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.429 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.429 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.429 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.429 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.429 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.429 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.429 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.429 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.429 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.429 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.429 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.429 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.429 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.429 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.429 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.429 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.429 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.429 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.429 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.429 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.429 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.429 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.429 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.429 18:12:07 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.429 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.429 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.429 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.429 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.429 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.429 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.429 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.429 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.429 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.429 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.429 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.429 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.429 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.429 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.429 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.429 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.429 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.429 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.429 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.429 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.429 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.429 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.429 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.429 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.429 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.429 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.429 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.429 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.429 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.429 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.429 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.429 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.429 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.429 18:12:07 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:05:21.429 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.429 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.429 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.429 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.429 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.429 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.429 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.429 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.429 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.429 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.429 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.429 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.429 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.429 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.429 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.429 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.429 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.429 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.429 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.429 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.429 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.429 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.429 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.429 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.429 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.429 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.429 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.429 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.429 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.429 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.429 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.429 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.429 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.429 18:12:07 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.429 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.429 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.429 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.429 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.429 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.429 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.429 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.429 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.429 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.429 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.429 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.429 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.429 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.429 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.429 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.429 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.429 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.429 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.429 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.429 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.429 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.429 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.429 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.430 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.430 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.430 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.430 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.430 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:05:21.430 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:21.430 18:12:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:21.430 18:12:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:05:21.430 18:12:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:05:21.430 18:12:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:21.430 18:12:07 
setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:21.430 18:12:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:21.430 18:12:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:21.430 18:12:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:21.430 18:12:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:21.430 18:12:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:21.430 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:21.430 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:05:21.430 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:21.430 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:21.430 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:21.430 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:21.430 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:21.430 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:21.430 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:21.430 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.430 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.430 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6824516 kB' 'MemUsed: 5417460 kB' 'SwapCached: 0 kB' 'Active: 461616 kB' 'Inactive: 2553644 kB' 'Active(anon): 129576 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2553644 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'FilePages: 2896160 kB' 'Mapped: 48572 kB' 'AnonPages: 120720 kB' 'Shmem: 10476 kB' 'KernelStack: 6496 kB' 'PageTables: 3960 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 81612 kB' 'Slab: 160208 kB' 'SReclaimable: 81612 kB' 'SUnreclaim: 78596 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:21.430 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.430 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.430 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.430 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.430 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.430 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:21.430 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:21.430 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:21.430 18:12:07 
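The @17 through @33 entries above are setup/common.sh's get_meminfo scanning a meminfo file one field at a time: pick the per-node file when a node id is given, strip the per-node line prefix, then read key/value pairs until the requested key matches. A minimal self-contained sketch of that pattern, assuming a hypothetical helper name get_meminfo_sketch (this is not SPDK's verbatim implementation):

  #!/usr/bin/env bash
  shopt -s extglob   # needed for the +([0-9]) pattern below

  get_meminfo_sketch() {
      local get=$1 node=$2 var val _ mem_f mem
      mem_f=/proc/meminfo
      # With a node id, prefer the per-node counters exposed by sysfs.
      [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
          mem_f=/sys/devices/system/node/node$node/meminfo
      mapfile -t mem <"$mem_f"
      # Per-node lines carry a "Node <N> " prefix; strip it before matching.
      mem=("${mem[@]#Node +([0-9]) }")
      while IFS=': ' read -r var val _; do
          # "MemTotal: 12241976 kB" splits into var=MemTotal, val=12241976.
          [[ $var == "$get" ]] && echo "$val" && return 0
      done < <(printf '%s\n' "${mem[@]}")
      return 1
  }

  get_meminfo_sketch HugePages_Total     # system-wide, e.g. 1024
  get_meminfo_sketch HugePages_Surp 0    # NUMA node 0 only

The echo 1024 / return 0 entries in the trace are exactly this terminating branch firing on HugePages_Total.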
[... identical "IFS=': ' / read -r var val _ / [[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue" xtrace entries repeat for every node0 meminfo field, MemTotal through HugePages_Free ...]
00:05:21.431 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
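A note on the heavily escaped patterns throughout this trace: when the right-hand side of a [[ == ]] test is quoted, bash's xtrace prints its expansion escaped character by character, marking it as a literal string comparison rather than a glob. That is all \H\u\g\e\P\a\g\e\s\_\S\u\r\p means. Reproducible in isolation (sketch, output shown with the default '+ ' PS4):

  set -x
  get=HugePages_Surp
  [[ HugePages_Surp == "$get" ]] && echo match
  # traced roughly as:
  # + [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
  # + echo match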
00:05:21.431 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:05:21.431 18:12:07 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
node0=1024 expecting 1024
00:05:21.431 18:12:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:21.431 18:12:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:21.431 18:12:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:21.431 18:12:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:21.431 18:12:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:05:21.431 18:12:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:05:21.431
00:05:21.431 real 0m1.365s
00:05:21.431 user 0m0.595s
00:05:21.431 sys 0m0.731s
00:05:21.431 ************************************
00:05:21.431 END TEST default_setup
00:05:21.431 ************************************
00:05:21.431 18:12:07 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:21.431 18:12:07 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x
00:05:21.431 18:12:07 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:05:21.431 18:12:07 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:05:21.431 18:12:07 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:21.431 18:12:07 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:21.431 18:12:07 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:05:21.431 ************************************
00:05:21.431 START TEST per_node_1G_alloc
00:05:21.431 ************************************
00:05:21.431 18:12:07 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc
00:05:21.431 18:12:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=,
00:05:21.431 18:12:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0
00:05:21.431 18:12:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:05:21.431 18:12:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:05:21.431 18:12:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift
00:05:21.431 18:12:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
00:05:21.431 18:12:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:05:21.431 18:12:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:21.431 18:12:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:05:21.431 18:12:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:05:21.431 18:12:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:05:21.431 18:12:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:05:21.431 18:12:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
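How the get_test_nr_hugepages 1048576 0 call above lands on nr_hugepages=512: the first argument reads as a size in kB (the units work out: 1048576 kB is 1 GiB, matching the test name) and the remaining arguments are node ids, so the size is divided by the default hugepage size this system reports (Hugepagesize: 2048 kB). A sketch of the arithmetic the @55 through @57 entries imply:

  size=1048576            # requested total in kB (= 1 GiB per node)
  default_hugepages=2048  # Hugepagesize from /proc/meminfo, in kB
  echo $((size / default_hugepages))   # -> 512, i.e. nr_hugepages=512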
00:05:21.431 18:12:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:05:21.431 18:12:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:21.431 18:12:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:21.431 18:12:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:05:21.431 18:12:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:05:21.431 18:12:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:05:21.431 18:12:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0
00:05:21.431 18:12:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512
00:05:21.431 18:12:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0
00:05:21.431 18:12:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output
00:05:21.431 18:12:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:05:21.431 18:12:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:21.689 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:21.955 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:21.955 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:21.955 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:21.955 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:21.955 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512
00:05:21.955 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:05:21.955 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node
00:05:21.955 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:05:21.955 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:05:21.955 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp
00:05:21.955 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv
00:05:21.955 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon
00:05:21.955 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:21.955 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:21.955 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:21.955 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:05:21.955 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:05:21.955 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:21.955 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:21.955 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:21.955 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
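The NRHUGE=512 HUGENODE=0 invocation above asks scripts/setup.sh to place 512 two-MiB hugepages on NUMA node 0 before rebinding devices. The kernel-facing core of that request is the standard per-node sysfs knob; a sketch of the equivalent manual steps (illustrative only, the real script does considerably more):

  echo 512 | sudo tee /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
  cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages   # expect 512

The verify_nr_hugepages entries that follow then read the allocation back through meminfo to confirm the kernel actually granted it.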
00:05:21.955 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:21.955 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:21.955 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:21.956 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7869232 kB' 'MemAvailable: 10558768 kB' 'Buffers: 2436 kB' 'Cached: 2893728 kB' 'SwapCached: 0 kB' 'Active: 462848 kB' 'Inactive: 2553648 kB' 'Active(anon): 130808 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2553648 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 276 kB' 'Writeback: 0 kB' 'AnonPages: 121480 kB' 'Mapped: 48988 kB' 'Shmem: 10476 kB' 'KReclaimable: 81576 kB' 'Slab: 160208 kB' 'SReclaimable: 81576 kB' 'SUnreclaim: 78632 kB' 'KernelStack: 6556 kB' 'PageTables: 4564 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 346204 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54884 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB'
00:05:21.956 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
[... identical "IFS=': ' / read -r var val _ / [[ <field> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / continue" xtrace entries repeat for each /proc/meminfo field, MemTotal through HardwareCorrupted ...]
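Why AnonHugePages is being scanned here at all: transparent hugepages (THP) are kernel-managed anonymous-memory hugepages, separate from the hugetlb pool this test sizes, and they would skew the accounting if ignored. The trace's earlier "[[ always [madvise] never != *\[\n\e\v\e\r\]* ]]" entry is verify_nr_hugepages checking that THP is not fully disabled before sampling the counter. The same state can be inspected by hand:

  cat /sys/kernel/mm/transparent_hugepage/enabled   # e.g. "always [madvise] never"
  grep AnonHugePages /proc/meminfo                  # 0 kB in this run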
00:05:21.956 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:21.956 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:05:21.956 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:05:21.956 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0
00:05:21.956 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:21.956 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:21.956 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:05:21.956 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:05:21.956 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:21.956 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:21.956 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:21.956 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:21.956 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:21.956 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:21.956 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:21.956 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7869232 kB' 'MemAvailable: 10558768 kB' 'Buffers: 2436 kB' 'Cached: 2893728 kB' 'SwapCached: 0 kB' 'Active: 461992 kB' 'Inactive: 2553648 kB' 'Active(anon): 129952 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2553648 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 276 kB' 'Writeback: 0 kB' 'AnonPages: 121136 kB' 'Mapped: 48884 kB' 'Shmem: 10476 kB' 'KReclaimable: 81576 kB' 'Slab: 160280 kB' 'SReclaimable: 81576 kB' 'SUnreclaim: 78704 kB' 'KernelStack: 6540 kB' 'PageTables: 4220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 346204 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB'
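This second get_meminfo call runs as plain "get_meminfo HugePages_Surp" with an empty node id, which is why the trace probes the degenerate path /sys/devices/system/node/node/meminfo: the empty $node leaves "node" with no number, the file never exists, and the function falls back to the system-wide /proc/meminfo. A sketch of just that fallback branch:

  node=
  mem_f=/proc/meminfo
  [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
      mem_f=/sys/devices/system/node/node$node/meminfo
  echo "$mem_f"   # -> /proc/meminfo when $node is empty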
00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.957 
18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.957 18:12:08 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.957 18:12:08 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:21.958 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:21.958 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:21.958 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:21.958 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.958 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.958 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7869232 kB' 'MemAvailable: 10558768 kB' 'Buffers: 2436 kB' 'Cached: 2893728 kB' 'SwapCached: 0 kB' 'Active: 462172 kB' 'Inactive: 2553648 kB' 'Active(anon): 130132 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2553648 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 276 kB' 'Writeback: 0 kB' 'AnonPages: 121312 kB' 'Mapped: 48884 kB' 'Shmem: 10476 kB' 'KReclaimable: 81576 kB' 'Slab: 160276 kB' 'SReclaimable: 81576 
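For readers following the trace: the common.sh@16-33 lines above are the xtrace of a small get_meminfo helper. A minimal sketch of what that trace implies — the variable names (get, node, mem_f, mem) are copied from the trace itself, while the exact control flow is an assumption, not the verbatim SPDK source:

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the "Node +([0-9]) " prefix strip below

    get_meminfo() {
        local get=$1 node=${2:-}   # key to look up, optional NUMA node
        local var val
        local mem_f mem

        mem_f=/proc/meminfo
        # With a node argument the per-node meminfo is preferred; in this run
        # node= is empty, so the -e test fails and /proc/meminfo is kept.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"
        # Per-node files prefix every line with "Node N "; strip that prefix.
        mem=("${mem[@]#Node +([0-9]) }")

        # Walk "key: value [kB]" pairs until the requested key matches, then
        # print the value (a trailing kB unit lands in the throwaway "_").
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

Invoked as get_meminfo HugePages_Surp it prints 0 on this host, which hugepages.sh@99 stores as surp; the same helper is traced again below for HugePages_Rsvd and HugePages_Total.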
00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:21.957 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:21.958 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:21.958 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:21.958 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:21.958 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:21.958 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:21.958 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7869232 kB' 'MemAvailable: 10558768 kB' 'Buffers: 2436 kB' 'Cached: 2893728 kB' 'SwapCached: 0 kB' 'Active: 462172 kB' 'Inactive: 2553648 kB' 'Active(anon): 130132 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2553648 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 276 kB' 'Writeback: 0 kB' 'AnonPages: 121312 kB' 'Mapped: 48884 kB' 'Shmem: 10476 kB' 'KReclaimable: 81576 kB' 'Slab: 160276 kB' 'SReclaimable: 81576 kB' 'SUnreclaim: 78700 kB' 'KernelStack: 6540 kB' 'PageTables: 4220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 346204 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB'
00:05:21.958-21.959 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: every key from MemTotal through HugePages_Free fails [[ <key> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] and the loop hits continue]
00:05:21.959 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:21.959 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:05:21.959 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:05:21.959 nr_hugepages=512
resv_hugepages=0
surplus_hugepages=0
18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0
18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512
18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
anon_hugepages=0
18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
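The two (( ... )) guards above are plain arithmetic over values this log has already printed; a quick hand check (variable names mirror hugepages.sh, the echo messages are only illustrative):

    nr_hugepages=512 surp=0 resv=0   # values printed above by hugepages.sh
    (( nr_hugepages + surp + resv == 512 )) \
        && echo "all 512 requested pages accounted for"
    # Consistency with the meminfo snapshot: HugePages_Total pages of
    # Hugepagesize each should equal the Hugetlb field.
    (( 512 * 2048 == 1048576 )) && echo "512 x 2048 kB = 1 GiB reserved"

Both guards succeed, so the test re-reads HugePages_Total below.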
18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7869232 kB' 'MemAvailable: 10558768 kB' 'Buffers: 2436 kB' 'Cached: 2893728 kB' 'SwapCached: 0 kB' 'Active: 462148 kB' 'Inactive: 2553648 kB' 'Active(anon): 130108 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2553648 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 276 kB' 'Writeback: 0 kB' 'AnonPages: 121284 kB' 'Mapped: 48884 kB' 'Shmem: 10476 kB' 'KReclaimable: 81576 kB' 'Slab: 160272 kB' 'SReclaimable: 81576 kB' 'SUnreclaim: 78696 kB' 'KernelStack: 6540 kB' 'PageTables: 4220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 346204 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB'
00:05:21.959-21.960 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: keys MemTotal through CmaFree fail [[ <key> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] and hit continue; the scan is still in progress here]
00:05:21.960
18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.960 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.960 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.960 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.960 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.960 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.960 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.960 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 512 00:05:21.960 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:21.960 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:21.960 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:21.960 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:05:21.960 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:21.960 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:21.960 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:21.960 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:21.960 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:21.960 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:21.960 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:21.960 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:21.960 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:05:21.960 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:21.960 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:21.960 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:21.960 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:21.960 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:21.960 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:21.960 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:21.960 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.960 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.960 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7869232 kB' 'MemUsed: 4372744 kB' 'SwapCached: 0 kB' 'Active: 462016 kB' 'Inactive: 2553648 kB' 'Active(anon): 
129976 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2553648 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 276 kB' 'Writeback: 0 kB' 'FilePages: 2896164 kB' 'Mapped: 48884 kB' 'AnonPages: 121192 kB' 'Shmem: 10476 kB' 'KernelStack: 6556 kB' 'PageTables: 4272 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 81576 kB' 'Slab: 160272 kB' 'SReclaimable: 81576 kB' 'SUnreclaim: 78696 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:21.960 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.960 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.960 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.960 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.960 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.960 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.960 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.960 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.960 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.960 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.960 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.960 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.960 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.960 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.960 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.960 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.960 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.960 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.960 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.960 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.960 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.960 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.960 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.960 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.960 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.960 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.960 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:21.960 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.960 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.960 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:21.960 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.960 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.220 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.220 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:22.220 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.220 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.220 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.220 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:22.220 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.220 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.220 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.220 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:22.220 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.220 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.220 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.220 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:22.220 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.220 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.220 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.220 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:22.220 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.220 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.220 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.220 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:22.220 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.220 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.220 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.220 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:22.220 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.220 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.220 
18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.220 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:22.220 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.220 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.220 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.220 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:22.220 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.220 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.220 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.220 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:22.220 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.220 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.220 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.220 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:22.220 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.220 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.220 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.220 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:22.220 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.220 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.220 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.220 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:22.220 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.220 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.220 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.220 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:22.220 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.220 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.220 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.220 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:22.220 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.220 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.220 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:05:22.220 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:22.220 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.220 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.220 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.220 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:22.220 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.220 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.220 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.220 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:22.220 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.220 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.220 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.220 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:22.220 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.220 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.220 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.220 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:22.220 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.220 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.220 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.220 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:22.220 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.220 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.220 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.220 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:22.220 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.220 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.220 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.220 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:22.220 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.220 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.220 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.220 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:22.220 
18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.220 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.220 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.220 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:22.220 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.220 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.220 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.220 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:22.220 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.220 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.220 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.220 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:22.220 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.220 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.220 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.220 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:22.220 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.220 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.220 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.220 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:22.220 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:22.220 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:22.220 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:22.220 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:22.220 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:22.220 node0=512 expecting 512 00:05:22.220 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:22.220 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:22.220 00:05:22.220 real 0m0.685s 00:05:22.220 user 0m0.320s 00:05:22.220 sys 0m0.393s 00:05:22.220 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:22.220 18:12:08 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:22.220 ************************************ 00:05:22.220 END TEST per_node_1G_alloc 00:05:22.220 ************************************ 00:05:22.220 18:12:08 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:05:22.220 18:12:08 
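
The trace above shows the shape of setup/common.sh's get_meminfo helper: snapshot the relevant meminfo file, then walk it one "key: value" line at a time, continuing past every key until the requested one matches and its value is echoed. A minimal sketch of that pattern, reconstructed from the @17-@33 xtrace markers alone (the function body is an assumption, not the SPDK source):

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) pattern seen at common.sh@29

    # Sketch only: reconstructed from the xtrace markers above, not copied from SPDK.
    get_meminfo() {
        local get=$1 node=$2   # key to report; optional NUMA node (e.g. 0)
        local var val
        local mem_f=/proc/meminfo mem
        # Per-node queries read the node's own meminfo file (common.sh@23-24).
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Node files prefix each line with "Node N "; strip it (common.sh@29).
        mem=("${mem[@]#Node +([0-9]) }")
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # the long continue runs in the log
            echo "$val"                        # e.g. "echo 512" for HugePages_Total
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

Invoked as "get_meminfo HugePages_Total 0" it prints 512 on this box, which is the echo 512 / return 0 pair traced above.
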
00:05:22.220 18:12:08 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:05:22.220 18:12:08 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:22.220 18:12:08 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:22.220 18:12:08 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:05:22.220 ************************************
00:05:22.220 START TEST even_2G_alloc
00:05:22.220 ************************************
00:05:22.220 18:12:08 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc
00:05:22.220 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:05:22.220 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:05:22.220 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:05:22.220 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:22.220 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:05:22.220 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:05:22.220 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:05:22.220 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:05:22.220 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:05:22.220 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:05:22.220 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:22.220 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:22.220 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:05:22.220 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:05:22.220 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:22.220 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024
00:05:22.220 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0
00:05:22.220 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0
00:05:22.220 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:22.220 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:05:22.220 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:05:22.220 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output
00:05:22.220 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:05:22.220 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:22.479 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:22.742 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:22.742 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:22.742 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:22.742 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver
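
Before the verification pass below, the sizing lines at hugepages.sh@49-@84 above reduce to simple arithmetic. Spelled out with this run's traced values (units assumed to be kB, matching the "Hugepagesize: 2048 kB" field in the meminfo snapshots):

    # This run's values, spelled out (assumed unit: kB, per "Hugepagesize: 2048 kB").
    size=2097152                                 # requested total: 2097152 kB == 2 GiB
    default_hugepages=2048                       # one 2 MiB hugepage, in kB
    nr_hugepages=$((size / default_hugepages))   # 2097152 / 2048 = 1024 (hugepages.sh@57)
    nodes_test[0]=$nr_hugepages                  # one node (_no_nodes=1), so node0 takes all 1024
    NRHUGE=$nr_hugepages                         # handed to scripts/setup.sh (hugepages.sh@153)
    HUGE_EVEN_ALLOC=yes                          # together with the even-allocation flag
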
00:05:22.742 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:05:22.742 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node
00:05:22.742 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:05:22.742 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:05:22.742 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp
00:05:22.742 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv
00:05:22.742 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon
00:05:22.742 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:22.742 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:22.742 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:22.742 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:05:22.742 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:05:22.742 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:22.742 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:22.742 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:22.742 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:22.742 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:22.742 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:22.742 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:22.742 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:22.742 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6818692 kB' 'MemAvailable: 9508224 kB' 'Buffers: 2436 kB' 'Cached: 2893724 kB' 'SwapCached: 0 kB' 'Active: 462108 kB' 'Inactive: 2553644 kB' 'Active(anon): 130068 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2553644 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 284 kB' 'Writeback: 0 kB' 'AnonPages: 121176 kB' 'Mapped: 48716 kB' 'Shmem: 10476 kB' 'KReclaimable: 81576 kB' 'Slab: 160364 kB' 'SReclaimable: 81576 kB' 'SUnreclaim: 78788 kB' 'KernelStack: 6484 kB' 'PageTables: 4036 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 346204 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54884 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB'
00:05:22.742 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:22.742 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
[... ~39 identical xtrace iterations elided: setup/common.sh@32 compares each /proc/meminfo key (MemFree through HardwareCorrupted) against AnonHugePages and continues ...]
00:05:22.743 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:22.743 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:05:22.743 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:05:22.743 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0
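
With anon pinned at 0, the trace next collects the surplus count. Read together with the @89-@130 markers from the per-node test earlier, verify_nr_hugepages amounts to roughly this flow (a condensed sketch pieced together from the markers only, so details such as the HugePages_Rsvd lookup are assumptions):

    # Condensed sketch of verify_nr_hugepages, inferred from xtrace markers; not verbatim SPDK source.
    verify_nr_hugepages() {
        local node surp resv anon
        # Bail out if transparent hugepages are hard-disabled (hugepages.sh@96).
        [[ $(cat /sys/kernel/mm/transparent_hugepage/enabled) != *"[never]"* ]] || return 1
        anon=$(get_meminfo AnonHugePages)    # 0 in this run (hugepages.sh@97)
        surp=$(get_meminfo HugePages_Surp)   # the lookup that follows below (hugepages.sh@99)
        resv=$(get_meminfo HugePages_Rsvd)   # assumed; resv is declared at @93
        # Global pool must equal requested pages plus surplus and reserved (hugepages.sh@110).
        (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv )) || return 1
        # Each node's share is then checked and echoed, e.g. "node0=512 expecting 512" (@126-@130).
    }
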
# mem=("${mem[@]#Node +([0-9]) }") 00:05:22.744 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.744 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.744 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6818440 kB' 'MemAvailable: 9507972 kB' 'Buffers: 2436 kB' 'Cached: 2893724 kB' 'SwapCached: 0 kB' 'Active: 461948 kB' 'Inactive: 2553644 kB' 'Active(anon): 129908 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2553644 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 284 kB' 'Writeback: 0 kB' 'AnonPages: 121096 kB' 'Mapped: 48720 kB' 'Shmem: 10476 kB' 'KReclaimable: 81576 kB' 'Slab: 160420 kB' 'SReclaimable: 81576 kB' 'SUnreclaim: 78844 kB' 'KernelStack: 6496 kB' 'PageTables: 3956 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 346204 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54836 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:05:22.744 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.744 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:22.744 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.744 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.744 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.744 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:22.744 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.744 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.744 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.744 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:22.744 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.744 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.744 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.744 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:22.744 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.744 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.744 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.744 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:22.744 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.744 18:12:08 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.744 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.744 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:22.744 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.744 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.744 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.744 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:22.744 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.744 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.744 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.744 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:22.744 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.744 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.744 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.744 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:22.744 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.744 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.744 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.744 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:22.744 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.744 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.744 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.744 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:22.744 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.744 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.744 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.744 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:22.744 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.744 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.744 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.744 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:22.744 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.744 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.744 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.744 18:12:08 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:22.744 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.744 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.744 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.744 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:22.744 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.744 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.744 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.744 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:22.744 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.744 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.744 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.744 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:22.744 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.744 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.744 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.744 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:22.744 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.744 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.744 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.744 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:22.744 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.744 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.744 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.744 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:22.744 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.744 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.744 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.744 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:22.744 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.744 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.744 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.744 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:22.744 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.744 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:22.744 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.744 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:22.744 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.744 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.744 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.744 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:22.744 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.744 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.744 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.744 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:22.744 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.744 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.744 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.744 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:22.744 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.744 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.744 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.744 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:22.744 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.744 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.744 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.744 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:22.745 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.745 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.745 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.745 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:22.745 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.745 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.745 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.745 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:22.745 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.745 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.745 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.745 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:22.745 18:12:08 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.745 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.745 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.745 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:22.745 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.745 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.745 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.745 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:22.745 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.745 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.745 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.745 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:22.745 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.745 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.745 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.745 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:22.745 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.745 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.745 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.745 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:22.745 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.745 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.745 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.745 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:22.745 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.745 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.745 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.745 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:22.745 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.745 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.745 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.745 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:22.745 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.745 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.745 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- 
# [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.745 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:22.745 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.745 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.745 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.745 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:22.745 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.745 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.745 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.745 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:22.745 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.745 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.745 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.745 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:22.745 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.745 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.745 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.745 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:22.745 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.745 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.745 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.745 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:22.745 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.745 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.745 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.745 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:22.745 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.745 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.745 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.745 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:22.745 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.745 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.745 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.745 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:22.745 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:05:22.745 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.745 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.745 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:22.745 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.745 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.745 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.745 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:22.745 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.745 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.745 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.745 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:22.745 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.745 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.745 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.745 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:22.745 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:22.745 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:22.745 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:22.745 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:22.745 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:22.745 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:22.745 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:22.745 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:22.745 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:22.745 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:22.745 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:22.745 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:22.745 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6819096 kB' 'MemAvailable: 9508628 kB' 'Buffers: 2436 kB' 'Cached: 2893724 kB' 'SwapCached: 0 kB' 'Active: 462064 kB' 'Inactive: 2553644 kB' 'Active(anon): 130024 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2553644 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 284 kB' 'Writeback: 0 kB' 'AnonPages: 121180 kB' 'Mapped: 48572 kB' 'Shmem: 10476 kB' 'KReclaimable: 81576 kB' 'Slab: 160408 kB' 'SReclaimable: 81576 kB' 'SUnreclaim: 78832 kB' 'KernelStack: 6496 kB' 'PageTables: 3960 kB' 'SecPageTables: 0 kB' 
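The xtrace above is setup/common.sh's get_meminfo helper doing a key lookup: it snapshots a meminfo file into an array, strips any per-node "Node N " prefix, then walks the "key: value" lines one read at a time, skipping keys until the requested one matches and echoing its value. A minimal runnable sketch of that logic, reconstructed from the traced commands (names follow the trace; the upstream helper may differ in detail):

    #!/usr/bin/env bash
    # Sketch of the traced get_meminfo helper -- reconstructed, not verbatim.
    shopt -s extglob # the +([0-9]) pattern below needs extended globbing
    get_meminfo() {
        local get=$1 node=$2
        local var val
        local mem_f mem
        mem_f=/proc/meminfo
        # With a node argument, read that node's meminfo instead (cf. @23-@24).
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node files prefix every line with "Node N "; strip it (cf. @29).
        mem=("${mem[@]#Node +([0-9]) }")
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue # skip keys until the requested one
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
    }

Called as in this run, get_meminfo HugePages_Surp prints 0, and get_meminfo HugePages_Surp 0 (seen further down) reads /sys/devices/system/node/node0/meminfo rather than /proc/meminfo. The same scan repeats below for HugePages_Rsvd and HugePages_Total.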
00:05:22.745 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:22.745 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:22.745 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:05:22.745 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:05:22.745 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:22.745 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:22.745 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:22.745 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:22.745 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:22.745 18:12:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:22.745 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6819096 kB' 'MemAvailable: 9508628 kB' 'Buffers: 2436 kB' 'Cached: 2893724 kB' 'SwapCached: 0 kB' 'Active: 462064 kB' 'Inactive: 2553644 kB' 'Active(anon): 130024 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2553644 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 284 kB' 'Writeback: 0 kB' 'AnonPages: 121180 kB' 'Mapped: 48572 kB' 'Shmem: 10476 kB' 'KReclaimable: 81576 kB' 'Slab: 160408 kB' 'SReclaimable: 81576 kB' 'SUnreclaim: 78832 kB' 'KernelStack: 6496 kB' 'PageTables: 3960 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 346204 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54836 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB'
00:05:22.745 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:22.745 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:22.745 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:22.745 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
[xtrace elided: the read/compare/continue cycle repeats identically for every remaining key from MemFree through HugePages_Free before the match below]
00:05:22.747 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:22.747 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:05:22.747 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:05:22.747 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0
nr_hugepages=1024
00:05:22.747 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:05:22.748 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
resv_hugepages=0
surplus_hugepages=0
00:05:22.748 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
anon_hugepages=0
00:05:22.748 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:22.748 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:22.748 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
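With surp=0 and resv=0 established, the hugepages.sh@107-@110 lines are a pure accounting assertion: the 1024 pages the test requested must equal the kernel's persistent pool plus surplus plus reserved pages. Spelled out as a standalone check (an illustration built on the get_meminfo sketch above; requested is a stand-in for the test's expected count, not a variable from the script):

    # Assert that hugepage accounting is self-consistent after the even alloc.
    requested=1024
    surp=$(get_meminfo HugePages_Surp) # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd) # 0 in this run
    nr_hugepages=$(get_meminfo HugePages_Total)
    if (( requested == nr_hugepages + surp + resv )); then
        echo "accounting OK: $nr_hugepages pages x 2048 kB = $((nr_hugepages * 2048)) kB"
    fi

With the values in the snapshots above this prints "accounting OK: 1024 pages x 2048 kB = 2097152 kB", matching the Hugetlb line.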
6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:05:22.748 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.748 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:22.748 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.748 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.748 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.748 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:22.748 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.748 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.748 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.748 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:22.748 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.748 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.748 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.748 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:22.748 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.748 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.748 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.748 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:22.748 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.748 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.748 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.748 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:22.748 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.748 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.748 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.748 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:22.748 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.748 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.748 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.748 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:22.748 18:12:09 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.748 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.748 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.748 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:22.748 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.748 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.748 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.748 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:22.748 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.748 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.748 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.748 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:22.748 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.748 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.748 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.748 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:22.748 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.748 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.748 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.748 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:22.748 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.748 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.748 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.748 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:22.748 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.748 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.748 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.748 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:22.748 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.748 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.748 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.748 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:22.748 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.748 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.748 18:12:09 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.748 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:22.748 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.748 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.748 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.748 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:22.748 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.748 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.748 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.748 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:22.748 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.748 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.748 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.748 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:22.748 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.748 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.748 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.748 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:22.748 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.748 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.748 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.748 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:22.748 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.748 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.748 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.748 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:22.748 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.748 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.748 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.748 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:22.748 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.748 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.748 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.748 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:22.749 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:22.749 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
[... xtrace trimmed: setup/common.sh@31-@32 repeat the identical IFS=': ' / read -r var val _ / [[ <field> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] / continue cycle for SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree and Unaccepted ...]
00:05:22.749 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:22.749 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024
00:05:22.749 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:05:22.749 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:22.749 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:05:22.749 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node
00:05:22.749 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:22.749 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
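Note: the run of compare/continue records trimmed above is setup/common.sh's get_meminfo scanning the meminfo file one field at a time until the requested key matches. A minimal sketch of that loop, reconstructed from the xtrace alone (the shipped setup/common.sh may differ in detail):

    shopt -s extglob                        # the +([0-9]) pattern below needs extglob
    get_meminfo() {
        local get=$1 node=$2 var val mem_f mem
        mem_f=/proc/meminfo
        # per-node queries read that node's own meminfo file when it exists
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")    # strip the "Node N " prefix of per-node files
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue    # one compare/continue pair per field in the trace
            echo "$val" && return 0
        done < <(printf '%s\n' "${mem[@]}")
    }

Called here as get_meminfo HugePages_Total, it prints 1024, which hugepages.sh@110 then checks against nr_hugepages + surp + resv.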
00:05:22.749 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1
00:05:22.749 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:22.749 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:22.749 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:22.749 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:22.749 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:22.749 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0
00:05:22.749 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:05:22.749 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:22.749 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:22.749 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:22.749 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:22.749 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:22.749 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:22.749 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:22.749 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:22.749 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6819096 kB' 'MemUsed: 5422880 kB' 'SwapCached: 0 kB' 'Active: 461604 kB' 'Inactive: 2553644 kB' 'Active(anon): 129564 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2553644 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 284 kB' 'Writeback: 0 kB' 'FilePages: 2896160 kB' 'Mapped: 48572 kB' 'AnonPages: 120724 kB' 'Shmem: 10476 kB' 'KernelStack: 6480 kB' 'PageTables: 3908 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 81576 kB' 'Slab: 160408 kB' 'SReclaimable: 81576 kB' 'SUnreclaim: 78832 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[... xtrace trimmed: the same @31/@32 read / [[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue cycle runs over every entry of the node-0 dump above, from MemTotal through HugePages_Free, without a match ...]
00:05:22.751 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:22.751 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:05:22.751 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
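Note: the mem=("${mem[@]#Node +([0-9]) }") step visible in this node-0 pass exists because per-node meminfo files prefix every line with the node number. A tiny runnable demo of that extglob strip, using two values from the node-0 dump above:

    # demo of the "Node N " prefix strip (extglob), values taken from the dump above
    shopt -s extglob
    mem=('Node 0 MemTotal: 12241976 kB' 'Node 0 MemFree: 6819096 kB')
    mem=("${mem[@]#Node +([0-9]) }")
    printf '%s\n' "${mem[@]}"    # -> MemTotal: 12241976 kB / MemFree: 6819096 kB

After the strip, the same IFS=': ' read loop parses both /proc/meminfo and the per-node files; HugePages_Surp for node 0 comes back 0 here.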
00:05:22.751 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:22.751 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:22.751 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:22.751 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:22.751 node0=1024 expecting 1024
18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:05:22.751 18:12:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:05:22.751
00:05:22.751 real 0m0.654s
00:05:22.751 user 0m0.323s
00:05:22.751 sys 0m0.378s
18:12:09 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
18:12:09 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x
00:05:22.751 ************************************
00:05:22.751 END TEST even_2G_alloc
00:05:22.751 ************************************
00:05:22.751 18:12:09 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:05:22.751 18:12:09 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:05:22.751 18:12:09 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:22.751 18:12:09 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:22.751 18:12:09 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:05:22.751 ************************************
00:05:22.751 START TEST odd_alloc
00:05:22.751 ************************************
00:05:22.751 18:12:09 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc
00:05:22.751 18:12:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:05:22.751 18:12:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176
00:05:22.751 18:12:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:05:22.751 18:12:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:22.751 18:12:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:05:22.751 18:12:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:05:22.751 18:12:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:05:22.751 18:12:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:05:22.751 18:12:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:05:22.751 18:12:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:05:22.751 18:12:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:22.751 18:12:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:22.751 18:12:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:05:22.751 18:12:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:05:22.751 18:12:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:22.751 18:12:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025
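Note: the size handed to get_test_nr_hugepages, 2098176 kB, is exactly the HUGEMEM=2049 (MB) request that follows (2049 * 1024 = 2098176) and sits half a 2048 kB page above 2 GiB (2097152 kB), so it cannot be satisfied by an even page count: 2098176 / 2048 = 1024.5, and the trace lands on nr_hugepages=1025. That is consistent with ceiling division, though the exact rounding rule is inferred from the trace rather than quoted from hugepages.sh:

    size=2098176 default_hugepages=2048                              # both in kB
    echo $(( (size + default_hugepages - 1) / default_hugepages ))   # prints 1025

The odd page count is the whole point of the odd_alloc case; the later dumps indeed report HugePages_Total: 1025 and Hugetlb: 2099200 kB (1025 pages of 2048 kB).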
00:05:22.751 18:12:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0
00:05:22.751 18:12:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0
00:05:22.751 18:12:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:22.751 18:12:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:05:22.751 18:12:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:05:22.751 18:12:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output
00:05:22.751 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:05:22.751 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:23.322 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:23.322 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:23.322 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:23.322 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:23.322 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:23.322 18:12:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:05:23.322 18:12:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node
00:05:23.322 18:12:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:05:23.322 18:12:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:05:23.322 18:12:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp
00:05:23.322 18:12:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv
00:05:23.322 18:12:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon
00:05:23.322 18:12:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:23.322 18:12:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:23.322 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:23.322 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:05:23.322 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:05:23.322 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:23.322 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:23.322 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:23.322 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:23.322 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:23.322 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:23.322 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:23.322 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6814228 kB' 'MemAvailable: 9503764 kB' 'Buffers: 2436 kB' 'Cached: 2893728 kB' 'SwapCached: 0 kB' 'Active: 462752 kB' 'Inactive: 2553648 kB' 'Active(anon): 130712 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2553648 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 121772 kB' 'Mapped: 48688 kB' 'Shmem: 10476 kB' 'KReclaimable: 81576 kB' 'Slab: 160444 kB' 'SReclaimable: 81576 kB' 'SUnreclaim: 78868 kB' 'KernelStack: 6548 kB' 'PageTables: 4180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 346204 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54884 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB'
[... xtrace trimmed: the same @31/@32 read / [[ <field> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / continue cycle runs over the dump above, field by field, from MemTotal through HardwareCorrupted ...]
00:05:23.324 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:23.324 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:05:23.324 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:05:23.324 18:12:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0
00:05:23.324 18:12:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:23.324 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:23.324 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:05:23.324 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:05:23.324 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:23.324 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
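Note: verify_nr_hugepages opens by gating on transparent hugepages; the [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] record above is a glob test of the THP mode string for a literal "[never]". A hedged sketch of that gate (the sysfs path is the standard THP knob; variable names here are illustrative, not taken from the script):

    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)    # e.g. "always [madvise] never"
    if [[ $thp != *"[never]"* ]]; then                     # THP is not disabled...
        anon=$(get_meminfo AnonHugePages)                  # ...so sample AnonHugePages (0 kB here)
    fi

With the mode at [madvise] the branch is taken, get_meminfo AnonHugePages returns 0 (hence anon=0 above), and the trace moves on to read HugePages_Surp from /proc/meminfo, producing the second full dump below.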
00:05:23.324 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:23.324 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:23.324 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:23.324 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:23.324 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:23.324 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:23.324 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6814228 kB' 'MemAvailable: 9503764 kB' 'Buffers: 2436 kB' 'Cached: 2893728 kB' 'SwapCached: 0 kB' 'Active: 462144 kB' 'Inactive: 2553648 kB' 'Active(anon): 130104 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2553648 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 188 kB' 'Writeback: 0 kB' 'AnonPages: 121164 kB' 'Mapped: 48636 kB' 'Shmem: 10476 kB' 'KReclaimable: 81576 kB' 'Slab: 160436 kB' 'SReclaimable: 81576 kB' 'SUnreclaim: 78860 kB' 'KernelStack: 6528 kB' 'PageTables: 4060 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 346204 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB'
[... xtrace trimmed: the same @31/@32 read / [[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue cycle runs over the dump above, field by field, from MemTotal through HardwareCorrupted without a match ...]
00:05:23.325 18:12:09 setup.sh.hugepages.odd_alloc
-- setup/common.sh@31 -- # read -r var val _ 00:05:23.325 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.325 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:23.325 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.325 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.325 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.325 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:23.325 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.325 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.325 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.325 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:23.325 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.325 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.325 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.325 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:23.325 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.325 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.325 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.325 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:23.325 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.325 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.325 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.325 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:23.325 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.325 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.325 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.325 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:23.325 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.325 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.325 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.325 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:23.325 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.325 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.325 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.325 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:23.325 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.325 
18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.325 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.325 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:23.325 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.325 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.325 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.325 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:23.325 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.325 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.325 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.325 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:23.325 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:23.325 18:12:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:23.325 18:12:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:23.325 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:23.325 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:23.325 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:23.325 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:23.325 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:23.325 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:23.325 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:23.325 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:23.325 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:23.325 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.326 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.326 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6814228 kB' 'MemAvailable: 9503764 kB' 'Buffers: 2436 kB' 'Cached: 2893728 kB' 'SwapCached: 0 kB' 'Active: 461980 kB' 'Inactive: 2553648 kB' 'Active(anon): 129940 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2553648 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 188 kB' 'Writeback: 0 kB' 'AnonPages: 121092 kB' 'Mapped: 48576 kB' 'Shmem: 10476 kB' 'KReclaimable: 81576 kB' 'Slab: 160432 kB' 'SReclaimable: 81576 kB' 'SUnreclaim: 78856 kB' 'KernelStack: 6528 kB' 'PageTables: 4064 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 346204 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 
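For reference, the per-key scan elided above is the meminfo helper in setup/common.sh doing a linear search: it reads each "Key: value kB" line with IFS=': ', skips every key that is not the one requested, and echoes the value on a match. A minimal sketch of that pattern, assuming simplified names (illustrative only; the real helper first mapfiles the contents into an array and strips any "Node N " prefix before this loop):

    get_meminfo() {                            # sketch, not the verbatim helper
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # one xtrace "continue" per non-matching key
            echo "$val"                        # e.g. 0 for HugePages_Surp above
            return 0
        done < /proc/meminfo
    }

With that shape, surp=$(get_meminfo HugePages_Surp) mirrors the hugepages.sh@99 assignment just traced.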
00:05:23.325 18:12:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:23.325 18:12:09 setup.sh.hugepages.odd_alloc -- [xtrace elided: the same get_meminfo preamble — local get=HugePages_Rsvd, node unset, mem_f=/proc/meminfo, mapfile -t mem, "Node N " prefix strip, IFS=': ' read loop]
00:05:23.326 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6814228 kB' 'MemAvailable: 9503764 kB' 'Buffers: 2436 kB' 'Cached: 2893728 kB' 'SwapCached: 0 kB' 'Active: 461980 kB' 'Inactive: 2553648 kB' 'Active(anon): 129940 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2553648 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 188 kB' 'Writeback: 0 kB' 'AnonPages: 121092 kB' 'Mapped: 48576 kB' 'Shmem: 10476 kB' 'KReclaimable: 81576 kB' 'Slab: 160432 kB' 'SReclaimable: 81576 kB' 'SUnreclaim: 78856 kB' 'KernelStack: 6528 kB' 'PageTables: 4064 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 346204 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB'
00:05:23.326 18:12:09 setup.sh.hugepages.odd_alloc -- [xtrace elided: per-key scan (MemTotal through HugePages_Free) until HugePages_Rsvd matches]
00:05:23.327 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:23.327 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:05:23.327 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:05:23.327 18:12:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
00:05:23.327 nr_hugepages=1025
18:12:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
resv_hugepages=0
18:12:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
surplus_hugepages=0
18:12:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
anon_hugepages=0
18:12:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
18:12:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
18:12:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
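The two arithmetic guards at hugepages.sh@107 and @109 carry the core assertion of this odd_alloc pass: after requesting an odd page count, the kernel-reported total must be fully accounted for by the requested, surplus, and reserved pages. With the values just read that is 1025 == 1025 + 0 + 0, so both checks pass. A hedged restatement of the invariant, reusing the sketch helper and the variable names from the trace:

    # total reported by /proc/meminfo must equal requested + surplus + reserved
    (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv ))   # here: 1025 == 1025 + 0 + 0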
00:05:23.327 18:12:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:23.327 18:12:09 setup.sh.hugepages.odd_alloc -- [xtrace elided: same get_meminfo preamble — local get=HugePages_Total, node unset, mem_f=/proc/meminfo, mapfile -t mem, "Node N " prefix strip, IFS=': ' read loop]
00:05:23.328 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6814228 kB' 'MemAvailable: 9503764 kB' 'Buffers: 2436 kB' 'Cached: 2893728 kB' 'SwapCached: 0 kB' 'Active: 461672 kB' 'Inactive: 2553648 kB' 'Active(anon): 129632 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2553648 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 188 kB' 'Writeback: 0 kB' 'AnonPages: 120744 kB' 'Mapped: 48576 kB' 'Shmem: 10476 kB' 'KReclaimable: 81576 kB' 'Slab: 160428 kB' 'SReclaimable: 81576 kB' 'SUnreclaim: 78852 kB' 'KernelStack: 6496 kB' 'PageTables: 3960 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 346204 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB'
00:05:23.328 18:12:09 setup.sh.hugepages.odd_alloc -- [xtrace elided: per-key scan (MemTotal through Unaccepted) until HugePages_Total matches]
00:05:23.589 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:23.589 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025
00:05:23.589 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:05:23.589 18:12:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
00:05:23.589 18:12:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:05:23.589 18:12:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node
00:05:23.589 18:12:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:23.589 18:12:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025
00:05:23.589 18:12:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1
00:05:23.589 18:12:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:23.589 18:12:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:23.589 18:12:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:23.589 18:12:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:23.589 18:12:09 setup.sh.hugepages.odd_alloc -- [xtrace elided: get_meminfo preamble with node=0 — common.sh@23/@24 switch mem_f to /sys/devices/system/node/node0/meminfo, then mapfile -t mem and the "Node N " prefix strip]
00:05:23.590 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6814228 kB' 'MemUsed: 5427748 kB' 'SwapCached: 0 kB' 'Active: 461892 kB' 'Inactive: 2553648 kB' 'Active(anon): 129852 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2553648 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 188 kB' 'Writeback: 0 kB' 'FilePages: 2896164 kB' 'Mapped: 48576 kB' 'AnonPages: 120964 kB' 'Shmem: 10476 kB' 'KernelStack: 6480 kB' 'PageTables: 3908 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 81576 kB' 'Slab: 160428 kB' 'SReclaimable: 81576 kB' 'SUnreclaim: 78852 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0'
00:05:23.590 18:12:09 setup.sh.hugepages.odd_alloc -- [xtrace truncated mid-scan: per-key scan of the node0 meminfo keys toward HugePages_Surp; the log breaks off at the Shmem comparison]
]] 00:05:23.590 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:23.590 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.590 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.590 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.590 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:23.590 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.590 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.590 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.590 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:23.590 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.590 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.590 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.590 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:23.590 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.590 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.590 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.590 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:23.590 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.590 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.590 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.590 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:23.590 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.590 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.590 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.590 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:23.590 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.590 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.590 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.590 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:23.590 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.590 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.590 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.590 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:23.590 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.590 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.590 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.590 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:23.590 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.590 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.590 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.590 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:23.590 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.590 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.590 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.590 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:23.590 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.590 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.590 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.590 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:23.590 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.590 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.590 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.590 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:23.590 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.590 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.590 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.590 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:23.590 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.590 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.591 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.591 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:23.591 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.591 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.591 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.591 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:23.591 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.591 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.591 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.591 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:23.591 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.591 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.591 18:12:09 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.591 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:23.591 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.591 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.591 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.591 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:23.591 18:12:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:23.591 18:12:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:23.591 18:12:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:23.591 18:12:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:23.591 18:12:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:23.591 node0=1025 expecting 1025 00:05:23.591 18:12:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:05:23.591 18:12:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:05:23.591 00:05:23.591 real 0m0.637s 00:05:23.591 user 0m0.325s 00:05:23.591 sys 0m0.350s 00:05:23.591 18:12:09 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:23.591 18:12:09 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:23.591 ************************************ 00:05:23.591 END TEST odd_alloc 00:05:23.591 ************************************ 00:05:23.591 18:12:09 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:05:23.591 18:12:09 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:05:23.591 18:12:09 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:23.591 18:12:09 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:23.591 18:12:09 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:23.591 ************************************ 00:05:23.591 START TEST custom_alloc 00:05:23.591 ************************************ 00:05:23.591 18:12:09 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:05:23.591 18:12:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:05:23.591 18:12:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:05:23.591 18:12:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:05:23.591 18:12:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:05:23.591 18:12:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:05:23.591 18:12:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:05:23.591 18:12:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:05:23.591 18:12:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:23.591 18:12:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:23.591 18:12:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:23.591 
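With odd_alloc passing (node0=1025 expecting 1025, and the 1025 == nr_hugepages + surp + resv check at hugepages.sh@110 above), custom_alloc begins by converting a requested size into a page count: get_test_nr_hugepages 1048576 yields nr_hugepages=512. The arithmetic behind that, assuming the size argument is in kB and the 2048 kB default hugepage size that this log's meminfo dumps report as 'Hugepagesize: 2048 kB':

  # 1 GiB expressed in kB, divided by a 2048 kB hugepage:
  echo $(( 1048576 / 2048 ))   # -> 512, matching nr_hugepages=512 above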
18:12:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:23.591 18:12:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:23.591 18:12:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:23.591 18:12:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:23.591 18:12:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:23.591 18:12:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:23.591 18:12:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:23.591 18:12:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:23.591 18:12:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:23.591 18:12:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:23.591 18:12:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:05:23.591 18:12:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:23.591 18:12:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:23.591 18:12:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:23.591 18:12:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:05:23.591 18:12:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:05:23.591 18:12:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:05:23.591 18:12:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:05:23.591 18:12:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:05:23.591 18:12:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:05:23.591 18:12:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:23.591 18:12:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:23.591 18:12:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:23.591 18:12:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:23.591 18:12:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:23.591 18:12:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:23.591 18:12:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:23.591 18:12:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:05:23.591 18:12:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:23.591 18:12:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:05:23.591 18:12:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:05:23.591 18:12:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:05:23.591 18:12:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:05:23.591 18:12:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:23.591 18:12:09 
setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:23.849 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:24.112 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:24.112 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:24.112 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:24.112 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:24.112 18:12:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:05:24.112 18:12:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:05:24.112 18:12:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:05:24.112 18:12:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:24.112 18:12:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:24.112 18:12:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:24.112 18:12:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:24.112 18:12:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:24.112 18:12:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:24.112 18:12:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:24.112 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:24.112 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:24.112 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:24.112 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:24.112 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:24.112 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:24.112 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:24.112 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:24.112 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:24.112 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.112 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.112 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7855496 kB' 'MemAvailable: 10545032 kB' 'Buffers: 2436 kB' 'Cached: 2893728 kB' 'SwapCached: 0 kB' 'Active: 462312 kB' 'Inactive: 2553648 kB' 'Active(anon): 130272 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2553648 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 121628 kB' 'Mapped: 48696 kB' 'Shmem: 10476 kB' 'KReclaimable: 81576 kB' 'Slab: 160428 kB' 'SReclaimable: 81576 kB' 'SUnreclaim: 78852 kB' 'KernelStack: 6512 kB' 'PageTables: 4044 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 346204 kB' 
'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54884 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:05:24.113 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.113 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:24.113 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.113 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.113 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.113 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:24.113 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.113 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.113 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.113 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:24.113 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.113 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.113 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.113 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:24.113 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.113 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.113 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.113 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:24.113 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.113 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.113 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.113 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:24.113 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.113 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.113 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.113 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:24.113 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.113 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.113 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.113 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
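The snapshot printed above confirms the custom_alloc setup took effect: 'HugePages_Total: 512' 'HugePages_Free: 512' with 'Hugepagesize: 2048 kB'. Before counting anonymous hugepages, verify_nr_hugepages gates on transparent hugepage state; the test [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] at hugepages.sh@96 matches the bracketed active mode the kernel reports, presumably read from /sys/kernel/mm/transparent_hugepage/enabled. A hedged sketch of that gate, with illustrative variable names and the helper sketched earlier:

  thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null)
  if [[ $thp != *"[never]"* ]]; then
      # THP active mode is always or madvise: AnonHugePages can be nonzero
      anon=$(get_meminfo_sketch AnonHugePages)
  else
      anon=0
  fi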
00:05:24.113 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.113 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.113 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.113 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:24.113 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.113 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.113 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.113 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:24.113 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.113 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.113 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.113 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:24.113 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.113 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.113 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.113 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:24.113 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.113 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.113 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.113 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:24.113 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.113 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.113 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.113 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:24.113 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.113 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.113 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.113 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:24.113 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.113 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.113 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.113 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:24.113 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.113 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.113 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.113 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:24.113 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.113 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.113 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.113 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:24.113 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.113 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.113 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.113 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:24.113 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.113 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.113 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.113 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:24.113 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.113 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.113 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.113 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:24.113 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.113 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.113 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.113 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:24.113 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.113 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.113 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.113 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:24.113 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.113 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.113 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.113 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:24.113 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.113 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.113 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.113 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:24.113 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.113 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:24.113 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.113 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:24.113 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.113 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.113 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.113 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:24.113 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.113 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.113 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.113 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:24.113 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.113 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.113 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.113 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:24.113 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.113 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.113 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.114 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:24.114 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.114 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.114 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.114 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:24.114 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.114 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.114 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.114 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:24.114 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.114 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.114 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.114 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:24.114 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.114 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.114 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.114 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:24.114 18:12:10 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:24.114 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.114 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.114 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:24.114 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.114 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.114 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.114 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:24.114 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.114 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.114 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.114 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:24.114 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.114 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.114 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.114 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:24.114 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.114 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.114 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.114 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:24.114 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.114 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.114 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.114 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:24.114 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.114 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.114 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.114 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:24.114 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:24.114 18:12:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:24.114 18:12:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:24.114 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:24.114 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:24.114 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:24.114 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:24.114 18:12:10 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:24.114 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:24.114 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:24.114 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:24.114 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:24.114 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.114 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7855496 kB' 'MemAvailable: 10545032 kB' 'Buffers: 2436 kB' 'Cached: 2893728 kB' 'SwapCached: 0 kB' 'Active: 462164 kB' 'Inactive: 2553648 kB' 'Active(anon): 130124 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2553648 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 121184 kB' 'Mapped: 48696 kB' 'Shmem: 10476 kB' 'KReclaimable: 81576 kB' 'Slab: 160428 kB' 'SReclaimable: 81576 kB' 'SUnreclaim: 78852 kB' 'KernelStack: 6496 kB' 'PageTables: 3980 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 346204 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54884 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:05:24.114 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.114 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.114 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:24.114 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.114 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.114 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.114 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:24.114 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.114 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.114 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.114 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:24.114 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.114 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.114 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.114 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:24.114 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.114 
18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.114 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.114 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:24.114 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.114 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.114 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.114 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:24.114 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.114 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.114 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.114 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:24.114 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.114 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.114 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.114 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:24.114 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.114 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.114 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.114 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:24.114 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.114 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.114 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.114 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:24.114 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.114 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.114 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.114 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:24.114 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.114 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.114 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.114 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:24.114 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.114 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.114 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.115 18:12:10 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:05:24.115 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.115 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.115 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.115 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:24.115 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.115 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.115 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.115 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:24.115 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.115 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.115 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.115 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:24.115 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.115 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.115 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.115 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:24.115 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.115 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.115 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.115 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:24.115 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.115 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.115 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.115 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:24.115 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.115 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.115 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.115 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:24.115 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.115 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.115 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.115 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:24.115 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.115 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.115 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
[[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.115 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:24.115 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.115 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.115 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.115 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:24.115 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.115 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.115 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.115 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:24.115 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.115 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.115 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.115 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:24.115 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.115 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.115 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.115 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:24.115 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.115 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.115 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.115 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:24.115 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.115 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.115 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.115 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:24.115 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.115 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.115 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.115 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:24.115 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.115 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.115 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.115 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:24.115 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.115 18:12:10 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _
00:05:24.115 [trace condensed: the setup/common.sh@31-32 read/compare loop walks the remaining /proc/meminfo keys (NFS_Unstable through HugePages_Rsvd); none matches HugePages_Surp and every iteration hits 'continue']
00:05:24.116 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:24.116 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:05:24.116 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:05:24.116 18:12:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0
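The block above is one call into setup/common.sh's get_meminfo: the helper loads a meminfo file into an array, then walks it with an IFS=': ' read, comparing each key against the requested one and echoing the value on a match. Reconstructed from the common.sh@16-33 steps visible in this trace, a minimal sketch of that shape, assuming plain bash; it is not the verbatim SPDK helper.

#!/usr/bin/env bash
# Sketch only: reconstructed from the setup/common.sh@16-33 trace lines above,
# not the verbatim SPDK source.
shopt -s extglob # the +([0-9]) pattern below is an extglob pattern

get_meminfo() {
	local get=$1 node=$2
	local var val
	local mem_f=/proc/meminfo mem

	# Per-node statistics live in sysfs; prefer them when a node is given
	# (with $node empty this test is false, as in the @23 trace line above).
	if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi

	mapfile -t mem < "$mem_f"
	# Node files prefix every line with "Node <N> "; strip that prefix.
	mem=("${mem[@]#Node +([0-9]) }")

	local line
	for line in "${mem[@]}"; do
		# Split "Key:   value [kB]" on ':' and spaces.
		IFS=': ' read -r var val _ <<< "$line"
		[[ $var == "$get" ]] || continue
		echo "$val"
		return 0
	done
	return 1
}

get_meminfo HugePages_Surp      # global lookup; prints 0 in this run
get_meminfo HugePages_Surp 0    # node0 lookup via sysfs

Read this way, the '@33 echo 0 / return 0' pair above is the HugePages_Surp value surfacing through command substitution, which setup/hugepages.sh@99 captures as surp=0.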
00:05:24.116 18:12:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:24.116 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:24.116 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:05:24.116 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:05:24.116 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:24.116 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:24.116 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:24.116 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:24.116 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:24.116 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:24.116 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:24.116 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7855496 kB' 'MemAvailable: 10545032 kB' 'Buffers: 2436 kB' 'Cached: 2893728 kB' 'SwapCached: 0 kB' 'Active: 462004 kB' 'Inactive: 2553648 kB' 'Active(anon): 129964 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2553648 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 121068 kB' 'Mapped: 48576 kB' 'Shmem: 10476 kB' 'KReclaimable: 81576 kB' 'Slab: 160428 kB' 'SReclaimable: 81576 kB' 'SUnreclaim: 78852 kB' 'KernelStack: 6512 kB' 'PageTables: 4016 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 346204 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54868 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB'
00:05:24.116 [trace condensed: the common.sh@31-32 read/compare loop checks each key from MemTotal through HugePages_Free against HugePages_Rsvd; all iterations hit 'continue']
00:05:24.118 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:24.118 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:05:24.118 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:05:24.118 18:12:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0
00:05:24.118 nr_hugepages=512
00:05:24.118 18:12:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512
00:05:24.118 resv_hugepages=0
00:05:24.118 18:12:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:24.118 surplus_hugepages=0
00:05:24.118 18:12:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:24.118 anon_hugepages=0
00:05:24.118 18:12:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:24.118 18:12:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
00:05:24.118 18:12:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
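With surp, resv and nr_hugepages in hand, hugepages.sh@107-109 asserts that the requested pool is exactly what the kernel accounts for. A self-contained sketch of that check follows; variable names are taken from the trace, while the awk lookup is this sketch's own simplification in place of the get_meminfo helper sketched earlier.

#!/usr/bin/env bash
# Sketch of the consistency check traced at setup/hugepages.sh@107-110;
# an illustration of the invariant the script tests, not the shipped code.

meminfo_val() { awk -v k="$1:" '$1 == k {print $2}' /proc/meminfo; }

nr_hugepages=512                          # pages requested by the test
surp=$(meminfo_val HugePages_Surp)        # surplus pages; 0 in this run
resv=$(meminfo_val HugePages_Rsvd)        # reserved pages; 0 in this run
total=$(meminfo_val HugePages_Total)      # kernel's view of the pool; 512 here

# Mirror of '(( 512 == nr_hugepages + surp + resv ))': the pool the kernel
# reports must cover the requested pages plus any surplus and reserved pages.
if (( total == nr_hugepages + surp + resv )); then
	echo "hugepage accounting consistent: total=$total"
else
	echo "mismatch: total=$total vs $((nr_hugepages + surp + resv))" >&2
	exit 1
fi

In this run surp and resv are both 0, so the check reduces to HugePages_Total == 512, which the next get_meminfo call below confirms.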
00:05:24.118 18:12:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:24.118 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:24.118 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:05:24.118 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:05:24.118 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:24.118 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:24.118 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:24.118 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:24.118 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:24.118 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:24.118 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:24.118 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:24.118 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7855496 kB' 'MemAvailable: 10545032 kB' 'Buffers: 2436 kB' 'Cached: 2893728 kB' 'SwapCached: 0 kB' 'Active: 461976 kB' 'Inactive: 2553648 kB' 'Active(anon): 129936 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2553648 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 121040 kB' 'Mapped: 48576 kB' 'Shmem: 10476 kB' 'KReclaimable: 81576 kB' 'Slab: 160432 kB' 'SReclaimable: 81576 kB' 'SUnreclaim: 78856 kB' 'KernelStack: 6496 kB' 'PageTables: 3964 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 346204 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54868 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB'
00:05:24.118 [trace condensed: the common.sh@31-32 read/compare loop checks each key from MemTotal through Unaccepted against HugePages_Total; all iterations hit 'continue']
00:05:24.119 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:24.119 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512
00:05:24.119 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:05:24.119 18:12:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv ))
00:05:24.119 18:12:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:05:24.119 18:12:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node
00:05:24.119 18:12:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:24.119 18:12:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:05:24.119 18:12:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1
00:05:24.119 18:12:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
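get_nodes (hugepages.sh@27-33) discovers NUMA nodes by globbing /sys/devices/system/node/node+([0-9]) — an extglob pattern — and records a per-node page count; the loop that follows re-reads each node's own meminfo. A sketch of that walk, again a reconstruction rather than the shipped code; the awk field positions assume the 'Node <N> Key: value' layout visible in the node0 dump below.

#!/usr/bin/env bash
# Sketch of the per-node walk traced at setup/hugepages.sh@27-33 and @115-117;
# illustration only, not the verbatim SPDK code.
shopt -s extglob nullglob   # node+([0-9]) needs extglob; nullglob skips no-match

nodes_sys=()
for node in /sys/devices/system/node/node+([0-9]); do
	# Per-node meminfo lines read "Node <N> HugePages_Total:  <count>".
	nodes_sys[${node##*node}]=$(awk '$3 == "HugePages_Total:" {print $4}' "$node/meminfo")
done

no_nodes=${#nodes_sys[@]}
(( no_nodes > 0 )) || { echo "no NUMA nodes visible" >&2; exit 1; }

for n in "${!nodes_sys[@]}"; do
	echo "node$n: ${nodes_sys[$n]} hugepages"   # this run: node0: 512 hugepages
done

On the single-node VM in this run, ${node##*node} strips the path down to the node index, so node0 carries all 512 pages and no_nodes is 1, matching the @30-33 trace above.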
"${!nodes_test[@]}" 00:05:24.119 18:12:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:24.119 18:12:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:24.119 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:24.119 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:05:24.119 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:24.119 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:24.119 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:24.120 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:24.120 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:24.120 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:24.120 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:24.120 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.120 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.120 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7855496 kB' 'MemUsed: 4386480 kB' 'SwapCached: 0 kB' 'Active: 461692 kB' 'Inactive: 2553648 kB' 'Active(anon): 129652 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2553648 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'FilePages: 2896164 kB' 'Mapped: 48576 kB' 'AnonPages: 120788 kB' 'Shmem: 10476 kB' 'KernelStack: 6512 kB' 'PageTables: 4016 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 81576 kB' 'Slab: 160432 kB' 'SReclaimable: 81576 kB' 'SUnreclaim: 78856 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:24.120 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.120 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:24.120 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.120 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.120 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.120 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:24.120 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.120 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.120 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.120 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:24.120 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.120 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.120 18:12:10 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.120 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:24.120 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.120 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.120 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.120 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:24.120 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.120 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.120 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.120 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:24.120 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.120 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.120 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.120 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:24.120 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.120 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.120 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.120 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:24.120 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.120 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.120 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.120 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:24.120 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.120 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.120 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.120 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:24.120 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.120 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.120 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.120 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:24.120 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.120 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.120 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.120 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:24.120 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
00:05:24.120 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:24.120 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
[trace condensed: setup/common.sh@31-32 reads each remaining /proc/meminfo field in turn — Dirty, Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted, HugePages_Total, HugePages_Free — and continues past every one that is not HugePages_Surp]
00:05:24.121 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:24.121 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:05:24.121 18:12:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:05:24.121 18:12:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:24.121 18:12:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:24.121 18:12:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:24.121 18:12:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:24.121 node0=512 expecting 512
00:05:24.121 18:12:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:05:24.121 18:12:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:05:24.121
00:05:24.121 real    0m0.651s
00:05:24.121 user    0m0.321s
00:05:24.121 sys     0m0.372s
00:05:24.121 18:12:10 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:24.121 18:12:10 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x
00:05:24.121 ************************************
00:05:24.121 END TEST custom_alloc
00:05:24.121 ************************************
00:05:24.121 18:12:10 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:05:24.121 18:12:10 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:05:24.121 18:12:10 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:24.121 18:12:10 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:24.121 18:12:10 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:05:24.121 ************************************
00:05:24.121 START TEST no_shrink_alloc
00:05:24.121 ************************************
00:05:24.121 18:12:10 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc
00:05:24.121 18:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:05:24.121 18:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:05:24.121 18:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:05:24.121 18:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift
00:05:24.121 18:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
00:05:24.121 18:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:05:24.121 18:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:24.121 18:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:05:24.121 18:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:05:24.121 18:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:05:24.121 18:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:05:24.121 18:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:05:24.121 18:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:05:24.121 18:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:24.121 18:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:24.121 18:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:05:24.121 18:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:05:24.121 18:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:05:24.121 18:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0
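The values traced above imply the sizing rule used by get_test_nr_hugepages: the requested size is given in kB and divided by the default hugepage size to obtain a page count. A worked check against this run's numbers (the arithmetic is inferred from the trace and the meminfo snapshots below, not quoted from setup/hugepages.sh):

    # Hugepagesize in the snapshots below: 2048 kB (2 MiB pages)
    # Requested size: 2097152 kB (2 GiB)
    echo $(( 2097152 / 2048 ))   # -> 1024, matching nr_hugepages=1024 and Hugetlb: 2097152 kB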
00:05:24.121 18:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output
00:05:24.121 18:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:05:24.121 18:12:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:24.692 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:24.692 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:24.692 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:24.692 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:24.692 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:24.692 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:05:24.692 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:05:24.692 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:05:24.692 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:05:24.692 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:05:24.692 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:05:24.692 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:05:24.692 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:24.692 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:24.692 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:24.692 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:24.692 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:24.692 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:24.692 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:24.692 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:24.692 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:24.692 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:24.692 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:24.692 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:24.692 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:24.692 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6818104 kB' 'MemAvailable: 9507644 kB' 'Buffers: 2436 kB' 'Cached: 2893732 kB' 'SwapCached: 0 kB' 'Active: 460056 kB' 'Inactive: 2553652 kB' 'Active(anon): 128016 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2553652 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 119184 kB' 'Mapped: 48064 kB' 'Shmem: 10476 kB' 'KReclaimable: 81572 kB' 'Slab: 160220 kB' 'SReclaimable: 81572 kB' 'SUnreclaim: 78648 kB' 'KernelStack: 6544 kB' 'PageTables: 4228 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336220 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54820 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB'
[trace condensed: setup/common.sh@31-32 scans the snapshot above field by field, from MemTotal through HardwareCorrupted, continuing past every field that is not AnonHugePages]
00:05:24.694 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:24.694 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:24.694 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
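The call just traced is setup/common.sh's get_meminfo helper scanning /proc/meminfo line by line. A minimal sketch of the same technique, reconstructed from the xtrace output rather than quoted from the SPDK source (the per-node branch behind the /sys/devices/system/node/.../meminfo check is omitted):

    get_meminfo() {                            # usage: get_meminfo AnonHugePages
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # skip every field that is not the target
            echo "$val"                        # numeric value; a trailing "kB" lands in _
            return 0
        done < /proc/meminfo
        return 1                               # target field not present
    }

An equivalent one-shot lookup (a hypothetical alternative, not what the scripts use) would be awk '$1 == "AnonHugePages:" { print $2; exit }' /proc/meminfo, which also prints 0 on this host, matching the echo 0 above.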
00:05:24.694 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:05:24.694 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:24.694 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:24.694 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:24.694 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:24.694 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:24.694 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:24.694 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:24.694 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:24.694 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:24.694 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:24.694 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:24.694 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:24.694 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6818356 kB' 'MemAvailable: 9507892 kB' 'Buffers: 2436 kB' 'Cached: 2893728 kB' 'SwapCached: 0 kB' 'Active: 459420 kB' 'Inactive: 2553648 kB' 'Active(anon): 127380 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2553648 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 118496 kB' 'Mapped: 47832 kB' 'Shmem: 10476 kB' 'KReclaimable: 81572 kB' 'Slab: 160216 kB' 'SReclaimable: 81572 kB' 'SUnreclaim: 78644 kB' 'KernelStack: 6496 kB' 'PageTables: 3852 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336220 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54788 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB'
[trace condensed: the same field-by-field scan runs again, from MemTotal through HugePages_Rsvd, continuing past every field that is not HugePages_Surp]
00:05:24.696 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:24.696 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:24.696 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:24.696 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:05:24.696 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:24.696 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
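At this point verify_nr_hugepages has anon=0 and surp=0 and is fetching HugePages_Rsvd, the three counters it needs before checking per-node totals. A rough outline of that flow, reconstructed from the trace using the get_meminfo sketch above (the comparison logic itself lies outside this excerpt):

    anon=$(get_meminfo AnonHugePages)   # 0 here: transparent hugepages are not skewing the count
    surp=$(get_meminfo HugePages_Surp)  # 0 here: no surplus pages beyond nr_hugepages
    resv=$(get_meminfo HugePages_Rsvd)  # the lookup the trace is in the middle of
    # Per-node totals are then accumulated (the sorted_t / sorted_s arrays seen earlier)
    # and compared against the expected allocation, 1024 pages for this single-node test.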
18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:24.696 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:24.696 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:24.696 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:24.696 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:24.696 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:24.696 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:24.696 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:24.696 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.696 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.696 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6818356 kB' 'MemAvailable: 9507892 kB' 'Buffers: 2436 kB' 'Cached: 2893728 kB' 'SwapCached: 0 kB' 'Active: 459412 kB' 'Inactive: 2553648 kB' 'Active(anon): 127372 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2553648 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 118540 kB' 'Mapped: 47832 kB' 'Shmem: 10476 kB' 'KReclaimable: 81572 kB' 'Slab: 160212 kB' 'SReclaimable: 81572 kB' 'SUnreclaim: 78640 kB' 'KernelStack: 6448 kB' 'PageTables: 3708 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336220 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54788 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:05:24.696 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.696 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.696 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.696 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.696 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.696 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.696 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.696 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.696 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.696 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.696 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.696 18:12:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ [... per-key scan elided: each /proc/meminfo key from Buffers through FilePmdMapped is tested against HugePages_Rsvd and skipped with 'continue' ...] 00:05:24.959 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.959 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r
var val _ 00:05:24.959 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.959 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.959 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.959 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.959 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.959 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.959 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.959 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.959 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.959 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.959 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.959 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.959 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.959 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.959 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.959 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.959 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.959 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.959 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.959 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.959 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.959 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:24.959 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:24.959 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:24.959 nr_hugepages=1024 00:05:24.959 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:24.959 resv_hugepages=0 00:05:24.959 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:24.959 surplus_hugepages=0 00:05:24.959 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:24.959 anon_hugepages=0 00:05:24.959 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:24.959 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:24.959 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:24.959 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:24.959 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local 
get=HugePages_Total 00:05:24.959 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:24.959 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:24.959 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:24.959 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:24.959 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:24.959 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:24.959 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:24.959 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:24.960 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.960 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.960 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6818356 kB' 'MemAvailable: 9507892 kB' 'Buffers: 2436 kB' 'Cached: 2893728 kB' 'SwapCached: 0 kB' 'Active: 459104 kB' 'Inactive: 2553648 kB' 'Active(anon): 127064 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2553648 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 118192 kB' 'Mapped: 47836 kB' 'Shmem: 10476 kB' 'KReclaimable: 81572 kB' 'Slab: 160212 kB' 'SReclaimable: 81572 kB' 'SUnreclaim: 78640 kB' 'KernelStack: 6448 kB' 'PageTables: 3708 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336220 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54788 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:05:24.960 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.960 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.960 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.960 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.960 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.960 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.960 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.960 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.960 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.960 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.960 18:12:11 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:24.960 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ [... per-key scan elided: each /proc/meminfo key from Buffers through FilePmdMapped is tested against HugePages_Total and skipped with 'continue' ...] 00:05:24.961 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
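The scan elided above (and its twins elsewhere in this trace) is the xtrace of a small helper in setup/common.sh that pulls one key out of /proc/meminfo, or out of a node's own meminfo file when a node number is passed. A minimal sketch reconstructed from the traced commands; treat names and details as approximate rather than as the authoritative source:

#!/usr/bin/env bash
shopt -s extglob   # for the +([0-9]) pattern below

get_meminfo() {    # usage: get_meminfo <key> [node]
    local get=$1 node=$2
    local var val _ line
    local mem_f=/proc/meminfo
    local -a mem
    # Per-node lookups read the node's own meminfo file instead.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # Node meminfo prefixes every line with "Node N "; strip it.
    mem=("${mem[@]#Node +([0-9]) }")
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        # Skip (continue) until the requested key matches, then print it.
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done
    return 1
}

get_meminfo HugePages_Total     # 1024 on the machine traced here
get_meminfo HugePages_Surp 0    # 0 for node0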
00:05:24.961 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.961 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.961 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.961 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.961 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.961 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.961 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.961 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.961 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.961 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.961 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.961 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.961 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.961 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.961 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.961 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:05:24.961 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:24.961 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:24.961 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:24.961 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:05:24.961 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:24.961 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:24.961 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:24.961 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:24.961 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:24.961 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:24.961 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:24.961 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:24.961 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:05:24.961 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:24.961 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:24.961 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:24.961 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node0/meminfo ]] 00:05:24.961 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:24.961 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:24.961 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:24.961 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.961 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.961 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6818356 kB' 'MemUsed: 5423620 kB' 'SwapCached: 0 kB' 'Active: 459352 kB' 'Inactive: 2553648 kB' 'Active(anon): 127312 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2553648 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'FilePages: 2896164 kB' 'Mapped: 47836 kB' 'AnonPages: 118436 kB' 'Shmem: 10476 kB' 'KernelStack: 6448 kB' 'PageTables: 3708 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 81572 kB' 'Slab: 160212 kB' 'SReclaimable: 81572 kB' 'SUnreclaim: 78640 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:24.961 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.961 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.961 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.961 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.961 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.961 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.961 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.961 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.961 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.961 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.961 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.961 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.961 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.961 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.961 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.961 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.962 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.962 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.962 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.962 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ [... per-key scan elided: each node0 meminfo key from Inactive through ShmemHugePages is tested against HugePages_Surp and skipped with 'continue' ...] 00:05:24.962 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.962 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.962 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.962 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.962 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.962 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.962 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.962 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.962 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.962 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.962 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.962 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.962 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.962 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.962 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.962 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.962 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.963 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.963 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.963 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.963 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.963 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.963 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.963 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.963 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.963 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:24.963 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:24.963 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:24.963 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:24.963 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:24.963 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:24.963 node0=1024 expecting 1024 00:05:24.963 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:24.963 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:24.963 18:12:11 
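What the trace above boils down to: get_meminfo in setup/common.sh walks /proc/meminfo one line at a time, splitting on IFS=': ' with read -r var val _, skipping every key that is not the one requested, and echoing the value once HugePages_Surp matches. A minimal self-contained sketch of that pattern (the helper name here is made up; the real function also handles per-node meminfo, omitted):

  #!/usr/bin/env bash
  # Sketch of the lookup loop traced above: split each /proc/meminfo line
  # on ': ' and return the value of the requested key.
  get_meminfo_sketch() {
      local get=$1 var val _
      while IFS=': ' read -r var val _; do
          if [[ $var == "$get" ]]; then
              echo "$val"    # value field only; the "kB" unit, when present, lands in _
              return 0
          fi
      done </proc/meminfo
      return 1
  }
  get_meminfo_sketch HugePages_Surp   # prints 0 on the node traced above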
00:05:24.963 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:05:24.963 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512
00:05:24.963 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output
00:05:24.963 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:05:24.963 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:25.222 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:25.486 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:25.486 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:25.486 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:25.486 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:25.486 INFO: Requested 512 hugepages but 1024 already allocated on node0
00:05:25.486 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:05:25.486 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:05:25.486 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:05:25.486 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:05:25.486 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:05:25.486 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:05:25.486 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:05:25.486 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:25.486 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
[xtrace condensed, 00:05:25.486 18:12:11, setup/common.sh@17-29: local get=AnonHugePages, node= (empty), var/val locals, mem_f=/proc/meminfo, per-node meminfo path not present, mapfile -t mem, node-prefix strip]
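The INFO line above is the behavior this test exists to pin down: setup.sh was asked for NRHUGE=512 hugepages but found 1024 already allocated on node0 and left the larger allocation alone. A hedged sketch of that grow-only policy, with an illustrative sysfs path and variable names that are not the script's own:

  #!/usr/bin/env bash
  # Illustrative only: raise nr_hugepages when short, never shrink an
  # existing allocation (mirrors the INFO message, not the real script).
  # Writing to the sysfs file requires root.
  want=512
  node_dir=/sys/devices/system/node/node0/hugepages/hugepages-2048kB
  have=$(<"$node_dir/nr_hugepages")
  if (( have < want )); then
      echo "$want" >"$node_dir/nr_hugepages"
  else
      echo "INFO: Requested $want hugepages but $have already allocated on node0"
  fi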
00:05:25.486 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6812260 kB' 'MemAvailable: 9501796 kB' 'Buffers: 2436 kB' 'Cached: 2893728 kB' 'SwapCached: 0 kB' 'Active: 460320 kB' 'Inactive: 2553648 kB' 'Active(anon): 128280 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2553648 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 119396 kB' 'Mapped: 47964 kB' 'Shmem: 10476 kB' 'KReclaimable: 81572 kB' 'Slab: 160200 kB' 'SReclaimable: 81572 kB' 'SUnreclaim: 78628 kB' 'KernelStack: 6500 kB' 'PageTables: 3928 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336352 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54820 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB'
[xtrace condensed, 00:05:25.486-00:05:25.488 18:12:11, setup/common.sh@31-32: every key from MemTotal through HardwareCorrupted compared against AnonHugePages; none match, loop continues]
00:05:25.488 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:25.488 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:25.488 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:25.488 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:05:25.488 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
[xtrace condensed, 00:05:25.488 18:12:11, setup/common.sh@17-29: same get_meminfo prologue as above, now with get=HugePages_Surp]
00:05:25.488 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6812688 kB' 'MemAvailable: 9502224 kB' 'Buffers: 2436 kB' 'Cached: 2893728 kB' 'SwapCached: 0 kB' 'Active: 459580 kB' 'Inactive: 2553648 kB' 'Active(anon): 127540 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2553648 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 118644 kB' 'Mapped: 47948 kB' 'Shmem: 10476 kB' 'KReclaimable: 81572 kB' 'Slab: 160200 kB' 'SReclaimable: 81572 kB' 'SUnreclaim: 78628 kB' 'KernelStack: 6464 kB' 'PageTables: 3768 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336352 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54788 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB'
[xtrace condensed, 00:05:25.488-00:05:25.489 18:12:11, setup/common.sh@31-32: every key from MemTotal through HugePages_Rsvd compared against HugePages_Surp; none match, loop continues]
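One detail worth calling out from the condensed get_meminfo prologue before the match below: mem=("${mem[@]#Node +([0-9]) }") is an extglob parameter expansion that strips a leading "Node <n> " prefix, so per-node meminfo files under /sys/devices/system/node/ parse with the same key/value loop as /proc/meminfo. A standalone illustration:

  #!/usr/bin/env bash
  shopt -s extglob   # required for the +([0-9]) pattern
  mem=('Node 0 MemTotal: 12241976 kB' 'Node 0 HugePages_Total: 1024')
  mem=("${mem[@]#Node +([0-9]) }")   # drop the "Node <n> " prefix from every element
  printf '%s\n' "${mem[@]}"          # MemTotal: 12241976 kB / HugePages_Total: 1024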
00:05:25.489 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:25.489 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:25.489 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
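The runs of backslashes in these comparisons, \H\u\g\e\P\a\g\e\s\_\S\u\r\p, are not in the script source: they are how bash xtrace renders a quoted right-hand side of [[ == ]], escaping each character to show the match is literal rather than a glob pattern. Reproducible in isolation:

  var=HugePages_Surp get=HugePages_Surp
  set -x
  [[ $var == "$get" ]]   # xtrace prints: [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
  set +x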
00:05:25.489 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:05:25.489 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
[xtrace condensed, 00:05:25.489 18:12:11, setup/common.sh@17-29: same get_meminfo prologue, now with get=HugePages_Rsvd]
00:05:25.490 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6812688 kB' 'MemAvailable: 9502224 kB' 'Buffers: 2436 kB' 'Cached: 2893728 kB' 'SwapCached: 0 kB' 'Active: 459420 kB' 'Inactive: 2553648 kB' 'Active(anon): 127380 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2553648 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 118516 kB' 'Mapped: 47844 kB' 'Shmem: 10476 kB' 'KReclaimable: 81572 kB' 'Slab: 160200 kB' 'SReclaimable: 81572 kB' 'SUnreclaim: 78628 kB' 'KernelStack: 6448 kB' 'PageTables: 3708 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336352 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54804 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB'
[xtrace condensed, 00:05:25.490-00:05:25.491 18:12:11, setup/common.sh@31-32: keys MemTotal through VmallocTotal compared against HugePages_Rsvd so far; no match yet, scan continues]
18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.491 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.491 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:25.491 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.491 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.491 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.491 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:25.491 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.491 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.491 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.491 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:25.491 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.491 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.491 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.491 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:25.491 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.491 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.491 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.491 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:25.491 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.491 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.491 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.491 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:25.491 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.491 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.491 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.491 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:25.491 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.491 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.491 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.491 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:25.491 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.491 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.491 18:12:11 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.491 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:25.491 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.491 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.491 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.491 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:25.491 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.491 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.491 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.491 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:25.491 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.491 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.491 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.491 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:25.491 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.491 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.491 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.491 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:25.491 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.491 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.491 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.491 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:25.491 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.491 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.491 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.491 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:25.491 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:25.491 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:25.491 nr_hugepages=1024 00:05:25.491 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:25.491 resv_hugepages=0 00:05:25.491 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:25.491 surplus_hugepages=0 00:05:25.491 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:25.491 anon_hugepages=0 00:05:25.491 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:25.491 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 
00:05:25.491 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:05:25.491 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:25.491 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:25.491 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:25.491 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:25.491 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:25.491 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:25.491 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:25.491 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:25.491 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:25.491 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:25.491 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:25.491 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:25.492 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6812688 kB' 'MemAvailable: 9502224 kB' 'Buffers: 2436 kB' 'Cached: 2893728 kB' 'SwapCached: 0 kB' 'Active: 459384 kB' 'Inactive: 2553648 kB' 'Active(anon): 127344 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2553648 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 118456 kB' 'Mapped: 47836 kB' 'Shmem: 10476 kB' 'KReclaimable: 81572 kB' 'Slab: 160200 kB' 'SReclaimable: 81572 kB' 'SUnreclaim: 78628 kB' 'KernelStack: 6448 kB' 'PageTables: 3704 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336352 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54804 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB'
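The long runs of continue/IFS/read above are a single lookup unrolled by xtrace: setup/common.sh splits each snapshot line on IFS=': ', reads it into var and val, skips every key that is not the requested one, then echoes the value and returns. A minimal standalone sketch of that idiom, reconstructed from the traced commands (the while-read form condenses the script's mapfile-plus-read loop; it is not the literal setup/common.sh source):

    #!/usr/bin/env bash
    # Sketch of the meminfo lookup idiom traced at setup/common.sh@31-33.
    get_meminfo() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            # Non-matching keys are skipped -- the runs of "continue"
            # in the xtrace above are exactly this branch.
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < /proc/meminfo
        return 1
    }

    get_meminfo HugePages_Rsvd   # prints 0 on the machine traced above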
[xtrace condensed: the same get_meminfo loop scans the snapshot printed above key by key (MemTotal through Unaccepted), issuing continue for each, until HugePages_Total matches]
00:05:25.493 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:25.493 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:05:25.493 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:25.493 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:25.493 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:05:25.493 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:05:25.493 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:25.493 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:05:25.493 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1
00:05:25.493 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:25.493 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:25.493 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
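The hugepages.sh@110 arithmetic just traced is the test's core assertion: the kernel-reported HugePages_Total must equal the requested nr_hugepages plus surplus plus reserved pages (1024 == 1024 + 0 + 0 here). A hedged standalone sketch of that check (meminfo_val is a made-up helper name; the traced script uses its own get_meminfo):

    #!/usr/bin/env bash
    # Sketch of the pool-accounting assertion traced at setup/hugepages.sh@107-110.
    meminfo_val() { awk -v k="$1:" '$1 == k { print $2 }' /proc/meminfo; }

    nr_hugepages=1024                     # pool size requested by the test
    resv=$(meminfo_val HugePages_Rsvd)    # 0 in the trace above
    surp=$(meminfo_val HugePages_Surp)    # 0 in the trace above
    total=$(meminfo_val HugePages_Total)  # 1024 in the trace above

    (( total == nr_hugepages + surp + resv )) || {
        echo "hugepage pool inconsistent: $total != $nr_hugepages + $surp + $resv" >&2
        exit 1
    }
    echo "hugepage pool consistent: $total pages"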
00:05:25.493 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:25.493 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:25.493 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:05:25.493 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:25.493 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:25.493 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:25.493 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:25.493 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:25.493 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:25.493 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:25.493 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:25.493 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:25.493 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6812688 kB' 'MemUsed: 5429288 kB' 'SwapCached: 0 kB' 'Active: 459312 kB' 'Inactive: 2553648 kB' 'Active(anon): 127272 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2553648 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'FilePages: 2896164 kB' 'Mapped: 47836 kB' 'AnonPages: 118416 kB' 'Shmem: 10476 kB' 'KernelStack: 6432 kB' 'PageTables: 3652 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 81572 kB' 'Slab: 160200 kB' 'SReclaimable: 81572 kB' 'SUnreclaim: 78628 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[xtrace condensed: the get_meminfo loop scans the node0 snapshot key by key (MemTotal through HugePages_Free), issuing continue for each, until HugePages_Surp matches]
00:05:25.494 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:25.494 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:25.494 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:25.494 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:25.494 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:25.495 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:25.495 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
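The lookup above differs from the earlier /proc/meminfo pass only in its input: given node 0, setup/common.sh@22-29 switches to /sys/devices/system/node/node0/meminfo, whose lines carry a "Node 0 " prefix, and strips that prefix with the extglob pattern visible in the mem=(...) trace before matching keys. A sketch of that per-node variant (node_meminfo is a made-up name for illustration):

    #!/usr/bin/env bash
    # Sketch of the per-node lookup traced at setup/common.sh@22-29.
    shopt -s extglob   # required by the +([0-9]) pattern below

    node_meminfo() {
        local get=$1 node=$2 var val _ line
        local mem_f=/proc/meminfo mem
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        # Per-node lines read "Node 0 HugePages_Surp: 0"; drop the prefix.
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }

    node_meminfo HugePages_Surp 0   # prints 0 for node0 in the trace above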
00:05:25.495 node0=1024 expecting 1024
18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:05:25.495 18:12:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:05:25.495
00:05:25.495 real 0m1.367s
00:05:25.495 user 0m0.663s
00:05:25.495 sys 0m0.794s
18:12:11 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:25.495 18:12:11 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x
00:05:25.495 ************************************
00:05:25.495 END TEST no_shrink_alloc
00:05:25.495 ************************************
00:05:25.754 18:12:11 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:05:25.754 18:12:11 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp
00:05:25.754 18:12:11 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp
00:05:25.754 18:12:11 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:05:25.754 18:12:11 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:05:25.754 18:12:11 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:05:25.754 18:12:11 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:05:25.754 18:12:11 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:05:25.754 18:12:11 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:05:25.754 18:12:11 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:05:25.754
00:05:25.754 real 0m5.791s
00:05:25.754 user 0m2.714s
00:05:25.754 sys 0m3.268s
18:12:11 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:25.754 18:12:11 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:05:25.754 ************************************
00:05:25.754 END TEST hugepages
00:05:25.754 ************************************
00:05:25.754 18:12:11 setup.sh -- common/autotest_common.sh@1142 -- # return 0
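The clear_hp calls traced just before END TEST hugepages undo the allocation: every hugepage pool under every NUMA node gets its page count written back to 0 (each bare echo 0 above is one such write). A sketch of that teardown, assuming root and the sysfs layout shown in the trace (the consumer of CLEAR_HUGE is not visible in this log):

    #!/usr/bin/env bash
    # Sketch of the teardown traced at setup/hugepages.sh@37-45 (run as root).
    for node in /sys/devices/system/node/node[0-9]*; do
        for hp in "$node"/hugepages/hugepages-*; do
            # One write per pool size, e.g. hugepages-2048kB and hugepages-1048576kB.
            echo 0 > "$hp/nr_hugepages"
        done
    done
    export CLEAR_HUGE=yes   # exported in the trace for later setup.sh runs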
00:05:25.754 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:25.754 18:12:12 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:05:25.754 18:12:12 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:25.754 18:12:12 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:32.384 18:12:17 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:05:32.384 18:12:17 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:32.384 18:12:17 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:32.384 18:12:17 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:32.384 ************************************ 00:05:32.384 START TEST guess_driver 00:05:32.384 ************************************ 00:05:32.384 18:12:17 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:05:32.384 18:12:17 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:05:32.384 18:12:17 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:05:32.384 18:12:17 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:05:32.384 18:12:17 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:05:32.384 18:12:17 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:05:32.384 18:12:17 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:05:32.384 18:12:17 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:05:32.384 18:12:17 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:05:32.384 18:12:17 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:05:32.384 18:12:17 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:05:32.384 18:12:17 setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 00:05:32.384 18:12:17 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:05:32.384 18:12:17 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:05:32.385 18:12:17 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:05:32.385 18:12:17 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:05:32.385 18:12:17 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:05:32.385 18:12:17 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:05:32.385 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:05:32.385 18:12:17 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo uio_pci_generic 00:05:32.385 18:12:17 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:05:32.385 18:12:17 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:05:32.385 Looking for driver=uio_pci_generic 00:05:32.385 18:12:17 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:05:32.385 18:12:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:32.385 18:12:17 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 
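The guess_driver trace above tries vfio first and falls back to uio_pci_generic: with zero IOMMU groups and unsafe no-IOMMU mode unset, the vfio branch returns 1, and modprobe --show-depends then confirms uio_pci_generic resolves to real .ko modules. A minimal sketch of that decision, paraphrased from the setup/driver.sh lines above (simplified, not the verbatim script):

pick_driver() {
  shopt -s nullglob                               # an empty dir counts as zero groups
  local iommu_groups=(/sys/kernel/iommu_groups/*)
  local unsafe_vfio=N
  [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
    unsafe_vfio=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
  if (( ${#iommu_groups[@]} > 0 )) || [[ $unsafe_vfio == Y ]]; then
    echo vfio-pci
  elif modprobe --show-depends uio_pci_generic 2>/dev/null | grep -q '\.ko'; then
    echo uio_pci_generic                          # the outcome the trace reaches
  else
    echo 'No valid driver found'
  fi
}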
00:05:32.385 18:12:17 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:05:32.385 18:12:17 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:32.385 18:12:18 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:05:32.385 18:12:18 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:05:32.385 18:12:18 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:32.643 18:12:19 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:32.643 18:12:19 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:32.643 18:12:19 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:32.643 18:12:19 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:32.643 18:12:19 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:32.643 18:12:19 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:32.902 18:12:19 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:32.902 18:12:19 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:32.902 18:12:19 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:32.902 18:12:19 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:32.902 18:12:19 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:32.902 18:12:19 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:32.902 18:12:19 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:05:32.902 18:12:19 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:05:32.902 18:12:19 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:32.902 18:12:19 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:39.466 00:05:39.466 real 0m7.133s 00:05:39.466 user 0m0.746s 00:05:39.466 sys 0m1.469s 00:05:39.466 18:12:25 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:39.466 ************************************ 00:05:39.466 END TEST guess_driver 00:05:39.466 ************************************ 00:05:39.466 18:12:25 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:05:39.466 18:12:25 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:05:39.466 00:05:39.466 real 0m13.173s 00:05:39.466 user 0m1.085s 00:05:39.466 sys 0m2.276s 00:05:39.466 18:12:25 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:39.466 18:12:25 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:39.466 ************************************ 00:05:39.466 END TEST driver 00:05:39.466 ************************************ 00:05:39.466 18:12:25 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:05:39.466 18:12:25 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:39.466 18:12:25 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:39.466 18:12:25 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 
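The devices suite that starts next begins by filtering out zoned namespaces so only conventional block devices are considered as test disks. A sketch of that filter as the following trace exercises it, paraphrased from the autotest_common.sh lines below (every device in this run reports "none", so nothing gets excluded):

is_block_zoned() {
  local device=$1
  [[ -e /sys/block/$device/queue/zoned ]] || return 1   # no queue info: treat as regular
  [[ $(< "/sys/block/$device/queue/zoned") != none ]]   # "none" means conventional
}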
00:05:39.466 18:12:25 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:39.466 ************************************ 00:05:39.466 START TEST devices 00:05:39.466 ************************************ 00:05:39.466 18:12:25 setup.sh.devices -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:39.466 * Looking for test storage... 00:05:39.466 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:39.466 18:12:25 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:39.466 18:12:25 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:05:39.466 18:12:25 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:39.466 18:12:25 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:40.034 18:12:26 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:05:40.034 18:12:26 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:05:40.034 18:12:26 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:05:40.034 18:12:26 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:05:40.034 18:12:26 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:40.035 18:12:26 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:05:40.035 18:12:26 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:05:40.035 18:12:26 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:40.035 18:12:26 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:40.035 18:12:26 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:40.035 18:12:26 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:05:40.035 18:12:26 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:05:40.035 18:12:26 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:40.035 18:12:26 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:40.035 18:12:26 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:40.035 18:12:26 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1 00:05:40.035 18:12:26 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme2n1 00:05:40.035 18:12:26 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:05:40.035 18:12:26 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:40.035 18:12:26 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:40.035 18:12:26 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n2 00:05:40.035 18:12:26 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme2n2 00:05:40.035 18:12:26 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:05:40.035 18:12:26 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:40.035 18:12:26 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:40.035 18:12:26 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n3 00:05:40.035 18:12:26 setup.sh.devices -- 
common/autotest_common.sh@1662 -- # local device=nvme2n3 00:05:40.035 18:12:26 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:05:40.035 18:12:26 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:40.035 18:12:26 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:40.035 18:12:26 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3c3n1 00:05:40.035 18:12:26 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme3c3n1 00:05:40.035 18:12:26 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:05:40.035 18:12:26 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:40.035 18:12:26 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:40.035 18:12:26 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3n1 00:05:40.035 18:12:26 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme3n1 00:05:40.035 18:12:26 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:05:40.035 18:12:26 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:40.035 18:12:26 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:05:40.035 18:12:26 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:05:40.035 18:12:26 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:40.035 18:12:26 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:40.035 18:12:26 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:40.035 18:12:26 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:40.035 18:12:26 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:05:40.035 18:12:26 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:40.035 18:12:26 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:05:40.035 18:12:26 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:05:40.035 18:12:26 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:40.035 18:12:26 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:05:40.035 18:12:26 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:05:40.035 No valid GPT data, bailing 00:05:40.035 18:12:26 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:40.035 18:12:26 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:40.035 18:12:26 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:40.035 18:12:26 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:40.035 18:12:26 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:40.035 18:12:26 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:40.035 18:12:26 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:05:40.035 18:12:26 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:05:40.035 18:12:26 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:40.035 18:12:26 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:05:40.035 18:12:26 setup.sh.devices -- setup/devices.sh@200 -- # for 
block in "/sys/block/nvme"!(*c*) 00:05:40.035 18:12:26 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:05:40.035 18:12:26 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1 00:05:40.035 18:12:26 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:05:40.035 18:12:26 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:05:40.035 18:12:26 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:05:40.035 18:12:26 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:05:40.035 18:12:26 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:05:40.294 No valid GPT data, bailing 00:05:40.294 18:12:26 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:40.294 18:12:26 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:40.294 18:12:26 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:40.294 18:12:26 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:05:40.294 18:12:26 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1 00:05:40.294 18:12:26 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:05:40.294 18:12:26 setup.sh.devices -- setup/common.sh@80 -- # echo 6343335936 00:05:40.294 18:12:26 setup.sh.devices -- setup/devices.sh@204 -- # (( 6343335936 >= min_disk_size )) 00:05:40.294 18:12:26 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:40.294 18:12:26 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:05:40.294 18:12:26 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:40.294 18:12:26 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2n1 00:05:40.294 18:12:26 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2 00:05:40.294 18:12:26 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:12.0 00:05:40.294 18:12:26 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:05:40.294 18:12:26 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme2n1 00:05:40.294 18:12:26 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme2n1 pt 00:05:40.294 18:12:26 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme2n1 00:05:40.294 No valid GPT data, bailing 00:05:40.294 18:12:26 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:05:40.294 18:12:26 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:40.294 18:12:26 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:40.294 18:12:26 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n1 00:05:40.294 18:12:26 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme2n1 00:05:40.294 18:12:26 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n1 ]] 00:05:40.294 18:12:26 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:05:40.294 18:12:26 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:40.294 18:12:26 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:40.294 18:12:26 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:12.0 00:05:40.294 18:12:26 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:40.294 18:12:26 setup.sh.devices -- setup/devices.sh@201 -- # 
ctrl=nvme2n2 00:05:40.294 18:12:26 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2 00:05:40.294 18:12:26 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:12.0 00:05:40.294 18:12:26 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:05:40.294 18:12:26 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme2n2 00:05:40.294 18:12:26 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme2n2 pt 00:05:40.294 18:12:26 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme2n2 00:05:40.294 No valid GPT data, bailing 00:05:40.294 18:12:26 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:05:40.294 18:12:26 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:40.294 18:12:26 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:40.294 18:12:26 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n2 00:05:40.294 18:12:26 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme2n2 00:05:40.294 18:12:26 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n2 ]] 00:05:40.294 18:12:26 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:05:40.294 18:12:26 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:40.294 18:12:26 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:40.294 18:12:26 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:12.0 00:05:40.294 18:12:26 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:40.294 18:12:26 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2n3 00:05:40.294 18:12:26 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2 00:05:40.294 18:12:26 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:12.0 00:05:40.294 18:12:26 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:05:40.294 18:12:26 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme2n3 00:05:40.294 18:12:26 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme2n3 pt 00:05:40.294 18:12:26 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme2n3 00:05:40.554 No valid GPT data, bailing 00:05:40.554 18:12:26 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:05:40.554 18:12:26 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:40.554 18:12:26 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:40.554 18:12:26 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n3 00:05:40.554 18:12:26 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme2n3 00:05:40.554 18:12:26 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n3 ]] 00:05:40.554 18:12:26 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:05:40.554 18:12:26 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:40.554 18:12:26 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:40.554 18:12:26 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:12.0 00:05:40.554 18:12:26 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:40.554 18:12:26 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme3n1 00:05:40.554 18:12:26 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme3 00:05:40.554 
18:12:26 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:13.0 00:05:40.554 18:12:26 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\3\.\0* ]] 00:05:40.554 18:12:26 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme3n1 00:05:40.554 18:12:26 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme3n1 pt 00:05:40.554 18:12:26 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme3n1 00:05:40.554 No valid GPT data, bailing 00:05:40.554 18:12:26 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:05:40.554 18:12:26 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:40.554 18:12:26 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:40.554 18:12:26 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme3n1 00:05:40.554 18:12:26 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme3n1 00:05:40.554 18:12:26 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme3n1 ]] 00:05:40.554 18:12:26 setup.sh.devices -- setup/common.sh@80 -- # echo 1073741824 00:05:40.554 18:12:26 setup.sh.devices -- setup/devices.sh@204 -- # (( 1073741824 >= min_disk_size )) 00:05:40.554 18:12:26 setup.sh.devices -- setup/devices.sh@209 -- # (( 5 > 0 )) 00:05:40.554 18:12:26 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:40.554 18:12:26 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:40.554 18:12:26 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:40.554 18:12:26 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:40.554 18:12:26 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:40.554 ************************************ 00:05:40.554 START TEST nvme_mount 00:05:40.554 ************************************ 00:05:40.554 18:12:26 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:05:40.554 18:12:26 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:40.554 18:12:26 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:40.554 18:12:26 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:40.554 18:12:26 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:40.554 18:12:26 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:40.554 18:12:26 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:40.554 18:12:26 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:05:40.554 18:12:26 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:40.554 18:12:26 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:40.554 18:12:26 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:05:40.554 18:12:26 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:05:40.554 18:12:26 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:40.554 18:12:26 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:40.554 18:12:26 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:40.554 18:12:26 setup.sh.devices.nvme_mount -- 
setup/common.sh@46 -- # (( part++ )) 00:05:40.554 18:12:26 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:40.554 18:12:26 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:40.554 18:12:26 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:40.554 18:12:26 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:41.490 Creating new GPT entries in memory. 00:05:41.490 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:41.490 other utilities. 00:05:41.490 18:12:27 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:41.490 18:12:27 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:41.490 18:12:27 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:41.490 18:12:27 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:41.490 18:12:27 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:42.866 Creating new GPT entries in memory. 00:05:42.866 The operation has completed successfully. 00:05:42.866 18:12:28 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:42.866 18:12:28 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:42.866 18:12:28 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 71473 00:05:42.866 18:12:28 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:42.866 18:12:28 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:05:42.866 18:12:28 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:42.866 18:12:28 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:42.866 18:12:28 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:42.866 18:12:28 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:42.866 18:12:28 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:42.866 18:12:28 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:42.866 18:12:28 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:42.866 18:12:28 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:42.866 18:12:28 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:42.866 18:12:28 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:42.866 18:12:28 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:42.866 18:12:28 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:42.866 18:12:28 setup.sh.devices.nvme_mount -- 
setup/devices.sh@59 -- # local pci status 00:05:42.866 18:12:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.866 18:12:28 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:42.866 18:12:28 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:42.866 18:12:28 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:42.866 18:12:28 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:42.866 18:12:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:42.866 18:12:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:42.866 18:12:29 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:42.866 18:12:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.866 18:12:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:42.866 18:12:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:43.125 18:12:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:43.126 18:12:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:43.126 18:12:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:43.126 18:12:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:43.126 18:12:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:43.126 18:12:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:43.384 18:12:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:43.384 18:12:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:43.643 18:12:29 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:43.643 18:12:29 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:43.643 18:12:29 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:43.643 18:12:29 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:43.643 18:12:29 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:43.643 18:12:29 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:05:43.643 18:12:29 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:43.643 18:12:29 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:43.643 18:12:29 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:43.643 18:12:29 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:43.643 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 
(ext4): 53 ef 00:05:43.643 18:12:29 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:43.643 18:12:29 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:43.902 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:43.902 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:43.902 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:43.902 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:43.902 18:12:30 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:05:43.902 18:12:30 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:05:43.902 18:12:30 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:43.902 18:12:30 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:43.902 18:12:30 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:43.902 18:12:30 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:43.902 18:12:30 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:43.902 18:12:30 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:43.902 18:12:30 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:43.902 18:12:30 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:43.902 18:12:30 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:43.902 18:12:30 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:43.902 18:12:30 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:43.902 18:12:30 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:43.902 18:12:30 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:43.902 18:12:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:43.902 18:12:30 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:43.902 18:12:30 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:43.902 18:12:30 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:43.902 18:12:30 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:44.161 18:12:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:44.161 18:12:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:44.161 18:12:30 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:44.161 18:12:30 setup.sh.devices.nvme_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.161 18:12:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:44.161 18:12:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.161 18:12:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:44.161 18:12:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.420 18:12:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:44.420 18:12:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.420 18:12:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:44.420 18:12:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.679 18:12:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:44.679 18:12:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.938 18:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:44.938 18:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:44.938 18:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:44.938 18:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:44.938 18:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:44.938 18:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:44.938 18:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:05:44.938 18:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:44.938 18:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:44.938 18:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:44.938 18:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:05:44.938 18:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:44.938 18:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:44.938 18:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:44.938 18:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.938 18:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:44.938 18:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:44.938 18:12:31 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:44.938 18:12:31 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:45.197 18:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:45.197 18:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding 
PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:45.197 18:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:45.197 18:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.197 18:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:45.197 18:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.197 18:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:45.197 18:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.456 18:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:45.456 18:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.456 18:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:45.456 18:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.715 18:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:45.715 18:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.715 18:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:45.715 18:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:45.715 18:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:05:45.715 18:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:05:45.715 18:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:45.974 18:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:45.974 18:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:45.974 18:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:45.974 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:45.974 00:05:45.974 real 0m5.310s 00:05:45.974 user 0m1.411s 00:05:45.974 sys 0m1.569s 00:05:45.974 18:12:32 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:45.974 18:12:32 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:05:45.974 ************************************ 00:05:45.974 END TEST nvme_mount 00:05:45.974 ************************************ 00:05:45.974 18:12:32 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:05:45.974 18:12:32 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:45.974 18:12:32 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:45.974 18:12:32 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:45.974 18:12:32 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:45.974 ************************************ 00:05:45.974 START TEST dm_mount 00:05:45.974 ************************************ 00:05:45.974 18:12:32 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:05:45.974 18:12:32 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:45.974 18:12:32 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- 
# pv0=nvme0n1p1 00:05:45.975 18:12:32 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:45.975 18:12:32 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:45.975 18:12:32 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:45.975 18:12:32 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:05:45.975 18:12:32 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:45.975 18:12:32 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:45.975 18:12:32 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:05:45.975 18:12:32 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:05:45.975 18:12:32 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:45.975 18:12:32 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:45.975 18:12:32 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:45.975 18:12:32 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:45.975 18:12:32 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:45.975 18:12:32 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:45.975 18:12:32 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:45.975 18:12:32 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:45.975 18:12:32 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:45.975 18:12:32 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:45.975 18:12:32 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:46.920 Creating new GPT entries in memory. 00:05:46.920 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:46.920 other utilities. 00:05:46.920 18:12:33 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:46.920 18:12:33 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:46.920 18:12:33 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:46.920 18:12:33 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:46.920 18:12:33 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:47.856 Creating new GPT entries in memory. 00:05:47.856 The operation has completed successfully. 00:05:47.856 18:12:34 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:47.856 18:12:34 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:47.856 18:12:34 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:47.856 18:12:34 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:47.856 18:12:34 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:05:49.233 The operation has completed successfully. 
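Both nvme_mount and dm_mount drive the same partitioning helper seen in these traces: divide the target size down to sectors, zap the GPT, then carve consecutive partitions under flock so concurrent access to the disk can't interleave. A sketch consistent with the sgdisk arguments logged above, paraphrased from the setup/common.sh trace (sizes exactly as traced):

partition_drive() {
  local disk=$1 part_no=${2:-2}
  local size=1073741824
  (( size /= 4096 ))                      # 262144 sectors per partition, as traced
  sgdisk "/dev/$disk" --zap-all           # destroy any existing GPT/MBR structures
  local part part_start=0 part_end=0
  for (( part = 1; part <= part_no; part++ )); do
    (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
    (( part_end = part_start + size - 1 ))
    flock "/dev/$disk" sgdisk "/dev/$disk" --new=$part:$part_start:$part_end
  done
}

Called as partition_drive nvme0n1 2, this reproduces the logged calls: --new=1:2048:264191 and --new=2:264192:526335.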
00:05:49.233 18:12:35 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:49.233 18:12:35 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:49.233 18:12:35 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 72098 00:05:49.233 18:12:35 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:49.233 18:12:35 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:49.233 18:12:35 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:49.233 18:12:35 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:49.233 18:12:35 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:05:49.233 18:12:35 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:49.233 18:12:35 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:05:49.233 18:12:35 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:49.233 18:12:35 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:49.233 18:12:35 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:49.233 18:12:35 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:05:49.233 18:12:35 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:49.233 18:12:35 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:49.233 18:12:35 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:49.233 18:12:35 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:05:49.233 18:12:35 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:49.233 18:12:35 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:49.233 18:12:35 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:49.233 18:12:35 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:49.233 18:12:35 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:49.233 18:12:35 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:49.233 18:12:35 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:49.233 18:12:35 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:49.233 18:12:35 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:49.233 18:12:35 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:49.233 18:12:35 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:49.233 18:12:35 setup.sh.devices.dm_mount -- 
setup/devices.sh@56 -- # : 00:05:49.233 18:12:35 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:49.233 18:12:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.233 18:12:35 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:49.233 18:12:35 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:49.233 18:12:35 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:49.233 18:12:35 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:49.233 18:12:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:49.233 18:12:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:49.233 18:12:35 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:49.233 18:12:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.233 18:12:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:49.233 18:12:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.491 18:12:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:49.491 18:12:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.491 18:12:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:49.491 18:12:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.491 18:12:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:49.491 18:12:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.835 18:12:36 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:49.835 18:12:36 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:50.118 18:12:36 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:50.118 18:12:36 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:05:50.118 18:12:36 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:50.118 18:12:36 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:50.118 18:12:36 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:50.118 18:12:36 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:50.118 18:12:36 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:50.118 18:12:36 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:50.118 18:12:36 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:50.118 18:12:36 setup.sh.devices.dm_mount -- 
setup/devices.sh@50 -- # local mount_point= 00:05:50.118 18:12:36 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:05:50.118 18:12:36 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:50.118 18:12:36 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:50.118 18:12:36 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:50.118 18:12:36 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:50.118 18:12:36 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:50.118 18:12:36 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:50.118 18:12:36 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:50.118 18:12:36 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:50.376 18:12:36 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:50.376 18:12:36 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:50.376 18:12:36 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:50.376 18:12:36 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:50.376 18:12:36 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:50.376 18:12:36 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:50.376 18:12:36 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:50.376 18:12:36 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:50.634 18:12:36 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:50.634 18:12:36 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:50.634 18:12:36 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:50.634 18:12:36 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:50.892 18:12:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:50.892 18:12:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:50.892 18:12:37 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:50.892 18:12:37 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:50.892 18:12:37 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:05:50.892 18:12:37 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:05:50.892 18:12:37 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:50.892 18:12:37 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:50.892 18:12:37 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:51.150 18:12:37 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:51.150 18:12:37 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 
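The teardown traced here runs in a fixed order: unmount the dm mount point, remove the device-mapper node, then wipe filesystem signatures off both backing partitions (the wipefs output continues below). A paraphrased sketch of that cleanup_dm sequence, with the test's paths and names hard-coded as in the trace:

cleanup_dm() {
  local dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount
  mountpoint -q "$dm_mount" && umount "$dm_mount"
  [[ -L /dev/mapper/nvme_dm_test ]] && dmsetup remove --force nvme_dm_test
  [[ -b /dev/nvme0n1p1 ]] && wipefs --all /dev/nvme0n1p1   # clear the ext4 signature
  [[ -b /dev/nvme0n1p2 ]] && wipefs --all /dev/nvme0n1p2
}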
00:05:51.150 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:51.150 18:12:37 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:51.150 18:12:37 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:51.150 00:05:51.150 real 0m5.150s 00:05:51.150 user 0m1.025s 00:05:51.150 sys 0m1.045s 00:05:51.150 18:12:37 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:51.150 18:12:37 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:05:51.150 ************************************ 00:05:51.150 END TEST dm_mount 00:05:51.150 ************************************ 00:05:51.150 18:12:37 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:05:51.150 18:12:37 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:05:51.150 18:12:37 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:05:51.150 18:12:37 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:51.150 18:12:37 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:51.150 18:12:37 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:51.150 18:12:37 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:51.150 18:12:37 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:51.408 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:51.408 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:51.408 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:51.408 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:51.408 18:12:37 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:05:51.408 18:12:37 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:51.408 18:12:37 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:51.408 18:12:37 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:51.408 18:12:37 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:51.408 18:12:37 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:51.408 18:12:37 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:51.408 00:05:51.408 real 0m12.472s 00:05:51.408 user 0m3.317s 00:05:51.408 sys 0m3.440s 00:05:51.408 18:12:37 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:51.408 18:12:37 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:51.408 ************************************ 00:05:51.408 END TEST devices 00:05:51.408 ************************************ 00:05:51.408 18:12:37 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:05:51.408 00:05:51.408 real 0m43.610s 00:05:51.408 user 0m10.269s 00:05:51.408 sys 0m13.024s 00:05:51.408 18:12:37 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:51.408 18:12:37 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:51.408 ************************************ 00:05:51.408 END TEST setup.sh 00:05:51.408 ************************************ 00:05:51.408 18:12:37 -- common/autotest_common.sh@1142 -- # return 0 00:05:51.408 18:12:37 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:51.975 0000:00:03.0 (1af4 
1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:52.542 Hugepages 00:05:52.542 node hugesize free / total 00:05:52.542 node0 1048576kB 0 / 0 00:05:52.542 node0 2048kB 2048 / 2048 00:05:52.542 00:05:52.542 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:52.542 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:52.542 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:05:52.800 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:52.800 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:05:52.800 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:05:52.800 18:12:39 -- spdk/autotest.sh@130 -- # uname -s 00:05:52.800 18:12:39 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:05:52.800 18:12:39 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:05:52.800 18:12:39 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:53.367 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:53.933 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:53.933 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:53.933 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:53.933 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:53.933 18:12:40 -- common/autotest_common.sh@1532 -- # sleep 1 00:05:55.310 18:12:41 -- common/autotest_common.sh@1533 -- # bdfs=() 00:05:55.310 18:12:41 -- common/autotest_common.sh@1533 -- # local bdfs 00:05:55.310 18:12:41 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:05:55.310 18:12:41 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:05:55.310 18:12:41 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:55.310 18:12:41 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:55.310 18:12:41 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:55.310 18:12:41 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:55.310 18:12:41 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:55.310 18:12:41 -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:05:55.310 18:12:41 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:55.310 18:12:41 -- common/autotest_common.sh@1536 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:55.568 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:55.568 Waiting for block devices as requested 00:05:55.568 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:55.826 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:55.826 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:05:55.826 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:06:01.104 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:06:01.104 18:12:47 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:06:01.104 18:12:47 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:06:01.104 18:12:47 -- common/autotest_common.sh@1502 -- # grep 0000:00:10.0/nvme/nvme 00:06:01.104 18:12:47 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:06:01.104 18:12:47 -- common/autotest_common.sh@1502 -- # 
bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:06:01.104 18:12:47 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:06:01.104 18:12:47 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:06:01.104 18:12:47 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme1 00:06:01.104 18:12:47 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme1 00:06:01.104 18:12:47 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme1 ]] 00:06:01.104 18:12:47 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme1 00:06:01.104 18:12:47 -- common/autotest_common.sh@1545 -- # grep oacs 00:06:01.104 18:12:47 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:06:01.104 18:12:47 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:06:01.104 18:12:47 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:06:01.104 18:12:47 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:06:01.104 18:12:47 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme1 00:06:01.104 18:12:47 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:06:01.104 18:12:47 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:06:01.104 18:12:47 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:06:01.104 18:12:47 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:06:01.104 18:12:47 -- common/autotest_common.sh@1557 -- # continue 00:06:01.104 18:12:47 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:06:01.104 18:12:47 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:06:01.104 18:12:47 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:06:01.104 18:12:47 -- common/autotest_common.sh@1502 -- # grep 0000:00:11.0/nvme/nvme 00:06:01.104 18:12:47 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:06:01.104 18:12:47 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:06:01.104 18:12:47 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:06:01.104 18:12:47 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:06:01.104 18:12:47 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:06:01.104 18:12:47 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:06:01.104 18:12:47 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:06:01.104 18:12:47 -- common/autotest_common.sh@1545 -- # grep oacs 00:06:01.104 18:12:47 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:06:01.104 18:12:47 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:06:01.104 18:12:47 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:06:01.104 18:12:47 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:06:01.104 18:12:47 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:06:01.104 18:12:47 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:06:01.104 18:12:47 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:06:01.104 18:12:47 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:06:01.104 18:12:47 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:06:01.104 18:12:47 -- common/autotest_common.sh@1557 -- # continue 00:06:01.104 18:12:47 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:06:01.104 18:12:47 -- common/autotest_common.sh@1539 -- # 
get_nvme_ctrlr_from_bdf 0000:00:12.0 00:06:01.104 18:12:47 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:06:01.104 18:12:47 -- common/autotest_common.sh@1502 -- # grep 0000:00:12.0/nvme/nvme 00:06:01.104 18:12:47 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:06:01.104 18:12:47 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:06:01.104 18:12:47 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:06:01.104 18:12:47 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme2 00:06:01.104 18:12:47 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme2 00:06:01.104 18:12:47 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme2 ]] 00:06:01.104 18:12:47 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme2 00:06:01.104 18:12:47 -- common/autotest_common.sh@1545 -- # grep oacs 00:06:01.104 18:12:47 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:06:01.105 18:12:47 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:06:01.105 18:12:47 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:06:01.105 18:12:47 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:06:01.105 18:12:47 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme2 00:06:01.105 18:12:47 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:06:01.105 18:12:47 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:06:01.105 18:12:47 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:06:01.105 18:12:47 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:06:01.105 18:12:47 -- common/autotest_common.sh@1557 -- # continue 00:06:01.105 18:12:47 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:06:01.105 18:12:47 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:06:01.105 18:12:47 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:06:01.105 18:12:47 -- common/autotest_common.sh@1502 -- # grep 0000:00:13.0/nvme/nvme 00:06:01.105 18:12:47 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:06:01.105 18:12:47 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:06:01.105 18:12:47 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:06:01.105 18:12:47 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme3 00:06:01.105 18:12:47 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme3 00:06:01.105 18:12:47 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme3 ]] 00:06:01.105 18:12:47 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme3 00:06:01.105 18:12:47 -- common/autotest_common.sh@1545 -- # grep oacs 00:06:01.105 18:12:47 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:06:01.105 18:12:47 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:06:01.105 18:12:47 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:06:01.105 18:12:47 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:06:01.105 18:12:47 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme3 00:06:01.105 18:12:47 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:06:01.105 18:12:47 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:06:01.105 18:12:47 -- 
common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:06:01.105 18:12:47 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:06:01.105 18:12:47 -- common/autotest_common.sh@1557 -- # continue 00:06:01.105 18:12:47 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:06:01.105 18:12:47 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:01.105 18:12:47 -- common/autotest_common.sh@10 -- # set +x 00:06:01.105 18:12:47 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:06:01.105 18:12:47 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:01.105 18:12:47 -- common/autotest_common.sh@10 -- # set +x 00:06:01.105 18:12:47 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:01.672 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:02.239 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:02.239 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:06:02.239 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:02.239 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:06:02.498 18:12:48 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:06:02.498 18:12:48 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:02.498 18:12:48 -- common/autotest_common.sh@10 -- # set +x 00:06:02.498 18:12:48 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:06:02.498 18:12:48 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:06:02.498 18:12:48 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:06:02.498 18:12:48 -- common/autotest_common.sh@1577 -- # bdfs=() 00:06:02.498 18:12:48 -- common/autotest_common.sh@1577 -- # local bdfs 00:06:02.498 18:12:48 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:06:02.498 18:12:48 -- common/autotest_common.sh@1513 -- # bdfs=() 00:06:02.498 18:12:48 -- common/autotest_common.sh@1513 -- # local bdfs 00:06:02.498 18:12:48 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:02.498 18:12:48 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:02.498 18:12:48 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:06:02.498 18:12:48 -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:06:02.498 18:12:48 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:06:02.498 18:12:48 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:06:02.498 18:12:48 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:06:02.498 18:12:48 -- common/autotest_common.sh@1580 -- # device=0x0010 00:06:02.498 18:12:48 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:02.498 18:12:48 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:06:02.498 18:12:48 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:06:02.498 18:12:48 -- common/autotest_common.sh@1580 -- # device=0x0010 00:06:02.498 18:12:48 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:02.498 18:12:48 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:06:02.498 18:12:48 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:06:02.498 18:12:48 -- common/autotest_common.sh@1580 -- # device=0x0010 00:06:02.498 18:12:48 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:02.498 18:12:48 -- 
common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:06:02.498 18:12:48 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:06:02.498 18:12:48 -- common/autotest_common.sh@1580 -- # device=0x0010 00:06:02.498 18:12:48 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:02.498 18:12:48 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:06:02.498 18:12:48 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:06:02.498 18:12:48 -- common/autotest_common.sh@1593 -- # return 0 00:06:02.498 18:12:48 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:06:02.498 18:12:48 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:06:02.498 18:12:48 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:06:02.498 18:12:48 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:06:02.498 18:12:48 -- spdk/autotest.sh@162 -- # timing_enter lib 00:06:02.498 18:12:48 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:02.498 18:12:48 -- common/autotest_common.sh@10 -- # set +x 00:06:02.498 18:12:48 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:06:02.498 18:12:48 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:06:02.498 18:12:48 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:02.498 18:12:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:02.498 18:12:48 -- common/autotest_common.sh@10 -- # set +x 00:06:02.498 ************************************ 00:06:02.499 START TEST env 00:06:02.499 ************************************ 00:06:02.499 18:12:48 env -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:06:02.757 * Looking for test storage... 00:06:02.757 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:06:02.757 18:12:48 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:02.757 18:12:48 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:02.757 18:12:48 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:02.757 18:12:48 env -- common/autotest_common.sh@10 -- # set +x 00:06:02.757 ************************************ 00:06:02.757 START TEST env_memory 00:06:02.757 ************************************ 00:06:02.758 18:12:48 env.env_memory -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:02.758 00:06:02.758 00:06:02.758 CUnit - A unit testing framework for C - Version 2.1-3 00:06:02.758 http://cunit.sourceforge.net/ 00:06:02.758 00:06:02.758 00:06:02.758 Suite: memory 00:06:02.758 Test: alloc and free memory map ...[2024-07-11 18:12:49.033742] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:06:02.758 passed 00:06:02.758 Test: mem map translation ...[2024-07-11 18:12:49.094829] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:06:02.758 [2024-07-11 18:12:49.094902] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:06:02.758 [2024-07-11 18:12:49.095001] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:06:02.758 [2024-07-11 18:12:49.095047] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:06:03.016 passed 00:06:03.016 Test: mem map registration ...[2024-07-11 18:12:49.193899] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:06:03.016 [2024-07-11 18:12:49.194035] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:06:03.016 passed 00:06:03.016 Test: mem map adjacent registrations ...passed 00:06:03.016 00:06:03.016 Run Summary: Type Total Ran Passed Failed Inactive 00:06:03.016 suites 1 1 n/a 0 0 00:06:03.016 tests 4 4 4 0 0 00:06:03.016 asserts 152 152 152 0 n/a 00:06:03.016 00:06:03.016 Elapsed time = 0.344 seconds 00:06:03.016 ************************************ 00:06:03.016 END TEST env_memory 00:06:03.016 ************************************ 00:06:03.016 00:06:03.016 real 0m0.382s 00:06:03.016 user 0m0.349s 00:06:03.016 sys 0m0.028s 00:06:03.016 18:12:49 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:03.016 18:12:49 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:06:03.016 18:12:49 env -- common/autotest_common.sh@1142 -- # return 0 00:06:03.016 18:12:49 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:03.016 18:12:49 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:03.016 18:12:49 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:03.016 18:12:49 env -- common/autotest_common.sh@10 -- # set +x 00:06:03.016 ************************************ 00:06:03.016 START TEST env_vtophys 00:06:03.016 ************************************ 00:06:03.016 18:12:49 env.env_vtophys -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:03.016 EAL: lib.eal log level changed from notice to debug 00:06:03.016 EAL: Detected lcore 0 as core 0 on socket 0 00:06:03.275 EAL: Detected lcore 1 as core 0 on socket 0 00:06:03.275 EAL: Detected lcore 2 as core 0 on socket 0 00:06:03.275 EAL: Detected lcore 3 as core 0 on socket 0 00:06:03.275 EAL: Detected lcore 4 as core 0 on socket 0 00:06:03.275 EAL: Detected lcore 5 as core 0 on socket 0 00:06:03.275 EAL: Detected lcore 6 as core 0 on socket 0 00:06:03.275 EAL: Detected lcore 7 as core 0 on socket 0 00:06:03.275 EAL: Detected lcore 8 as core 0 on socket 0 00:06:03.275 EAL: Detected lcore 9 as core 0 on socket 0 00:06:03.275 EAL: Maximum logical cores by configuration: 128 00:06:03.275 EAL: Detected CPU lcores: 10 00:06:03.275 EAL: Detected NUMA nodes: 1 00:06:03.275 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:06:03.275 EAL: Detected shared linkage of DPDK 00:06:03.275 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:06:03.275 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:06:03.275 EAL: Registered [vdev] bus. 
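The spdk_mem_map *ERROR* lines from memory_ut further up are deliberate: the test feeds unaligned parameters to the map API to confirm they are rejected. As a minimal sketch of the API being exercised, assuming the spdk_mem_map_* declarations in include/spdk/env.h and that a NULL notify-ops table is acceptable for a standalone map:

    #include <inttypes.h>
    #include <stdio.h>
    #include "spdk/env.h"

    #define MAP_2MB (2ULL * 1024 * 1024)

    static void
    mem_map_sketch(void)
    {
        struct spdk_mem_map *map;
        uint64_t size = MAP_2MB;

        map = spdk_mem_map_alloc(0 /* default translation */, NULL, NULL);
        if (map == NULL) {
            return;
        }

        /* valid: vaddr and len are both 2 MiB-aligned */
        spdk_mem_map_set_translation(map, 2 * MAP_2MB, MAP_2MB, 0x1000);

        /* invalid, mirroring the log: an unaligned len (1234) is rejected */
        if (spdk_mem_map_set_translation(map, 2 * MAP_2MB, 1234, 0x1000) != 0) {
            printf("unaligned len rejected, as expected\n");
        }

        printf("translation: 0x%" PRIx64 "\n",
               spdk_mem_map_translate(map, 2 * MAP_2MB, &size));

        spdk_mem_map_free(&map);
    }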
00:06:03.275 EAL: bus.vdev log level changed from disabled to notice 00:06:03.275 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:06:03.275 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:06:03.275 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:06:03.275 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:06:03.275 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:06:03.275 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:06:03.275 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:06:03.275 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:06:03.275 EAL: No shared files mode enabled, IPC will be disabled 00:06:03.275 EAL: No shared files mode enabled, IPC is disabled 00:06:03.275 EAL: Selected IOVA mode 'PA' 00:06:03.275 EAL: Probing VFIO support... 00:06:03.275 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:06:03.275 EAL: VFIO modules not loaded, skipping VFIO support... 00:06:03.275 EAL: Ask a virtual area of 0x2e000 bytes 00:06:03.275 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:06:03.275 EAL: Setting up physically contiguous memory... 00:06:03.275 EAL: Setting maximum number of open files to 524288 00:06:03.275 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:06:03.275 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:06:03.275 EAL: Ask a virtual area of 0x61000 bytes 00:06:03.275 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:06:03.275 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:03.275 EAL: Ask a virtual area of 0x400000000 bytes 00:06:03.275 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:06:03.275 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:06:03.275 EAL: Ask a virtual area of 0x61000 bytes 00:06:03.275 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:06:03.275 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:03.275 EAL: Ask a virtual area of 0x400000000 bytes 00:06:03.275 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:06:03.275 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:06:03.275 EAL: Ask a virtual area of 0x61000 bytes 00:06:03.275 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:06:03.275 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:03.275 EAL: Ask a virtual area of 0x400000000 bytes 00:06:03.275 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:06:03.275 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:06:03.275 EAL: Ask a virtual area of 0x61000 bytes 00:06:03.275 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:06:03.275 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:03.275 EAL: Ask a virtual area of 0x400000000 bytes 00:06:03.275 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:06:03.275 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:06:03.275 EAL: Hugepages will be freed exactly as allocated. 
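The virtual-area sizes requested above follow directly from the memseg-list parameters EAL printed:

    n_segs x hugepage_sz = 8192 x 2 MiB = 0x400000000 bytes = 16 GiB per list

Four such lists are created, so roughly 64 GiB of virtual address space is reserved up front; each 0x61000-byte area holds a list's descriptors, and actual hugepages are only mapped in later as the heap grows and, per the last line, are freed exactly as they were allocated.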
00:06:03.275 EAL: No shared files mode enabled, IPC is disabled 00:06:03.275 EAL: No shared files mode enabled, IPC is disabled 00:06:03.275 EAL: TSC frequency is ~2200000 KHz 00:06:03.275 EAL: Main lcore 0 is ready (tid=7f80f0e31a40;cpuset=[0]) 00:06:03.275 EAL: Trying to obtain current memory policy. 00:06:03.275 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:03.275 EAL: Restoring previous memory policy: 0 00:06:03.275 EAL: request: mp_malloc_sync 00:06:03.275 EAL: No shared files mode enabled, IPC is disabled 00:06:03.275 EAL: Heap on socket 0 was expanded by 2MB 00:06:03.275 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:06:03.275 EAL: No shared files mode enabled, IPC is disabled 00:06:03.275 EAL: No PCI address specified using 'addr=' in: bus=pci 00:06:03.275 EAL: Mem event callback 'spdk:(nil)' registered 00:06:03.275 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:06:03.275 00:06:03.275 00:06:03.275 CUnit - A unit testing framework for C - Version 2.1-3 00:06:03.275 http://cunit.sourceforge.net/ 00:06:03.275 00:06:03.275 00:06:03.275 Suite: components_suite 00:06:03.842 Test: vtophys_malloc_test ...passed 00:06:03.842 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:06:03.842 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:03.842 EAL: Restoring previous memory policy: 4 00:06:03.842 EAL: Calling mem event callback 'spdk:(nil)' 00:06:03.842 EAL: request: mp_malloc_sync 00:06:03.842 EAL: No shared files mode enabled, IPC is disabled 00:06:03.842 EAL: Heap on socket 0 was expanded by 4MB 00:06:03.842 EAL: Calling mem event callback 'spdk:(nil)' 00:06:03.842 EAL: request: mp_malloc_sync 00:06:03.842 EAL: No shared files mode enabled, IPC is disabled 00:06:03.842 EAL: Heap on socket 0 was shrunk by 4MB 00:06:03.842 EAL: Trying to obtain current memory policy. 00:06:03.842 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:03.842 EAL: Restoring previous memory policy: 4 00:06:03.842 EAL: Calling mem event callback 'spdk:(nil)' 00:06:03.842 EAL: request: mp_malloc_sync 00:06:03.842 EAL: No shared files mode enabled, IPC is disabled 00:06:03.842 EAL: Heap on socket 0 was expanded by 6MB 00:06:03.842 EAL: Calling mem event callback 'spdk:(nil)' 00:06:03.842 EAL: request: mp_malloc_sync 00:06:03.842 EAL: No shared files mode enabled, IPC is disabled 00:06:03.842 EAL: Heap on socket 0 was shrunk by 6MB 00:06:03.842 EAL: Trying to obtain current memory policy. 00:06:03.842 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:03.842 EAL: Restoring previous memory policy: 4 00:06:03.842 EAL: Calling mem event callback 'spdk:(nil)' 00:06:03.842 EAL: request: mp_malloc_sync 00:06:03.842 EAL: No shared files mode enabled, IPC is disabled 00:06:03.842 EAL: Heap on socket 0 was expanded by 10MB 00:06:03.842 EAL: Calling mem event callback 'spdk:(nil)' 00:06:03.842 EAL: request: mp_malloc_sync 00:06:03.842 EAL: No shared files mode enabled, IPC is disabled 00:06:03.842 EAL: Heap on socket 0 was shrunk by 10MB 00:06:03.842 EAL: Trying to obtain current memory policy. 
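The expand/shrink rounds running here appear to come from vtophys_spdk_malloc_test allocating a buffer of twice the previous size on each iteration. The reported heap deltas fit

    delta_n = (2^n + 2) MB  ->  4, 6, 10, 18, 34, 66, 130, 258, 514, 1026 MB

i.e. a doubling allocation plus a constant ~2 MB of heap overhead, with every expansion matched by an equal shrink once the buffer is freed.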
00:06:03.842 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:03.842 EAL: Restoring previous memory policy: 4 00:06:03.842 EAL: Calling mem event callback 'spdk:(nil)' 00:06:03.842 EAL: request: mp_malloc_sync 00:06:03.842 EAL: No shared files mode enabled, IPC is disabled 00:06:03.842 EAL: Heap on socket 0 was expanded by 18MB 00:06:03.842 EAL: Calling mem event callback 'spdk:(nil)' 00:06:03.842 EAL: request: mp_malloc_sync 00:06:03.842 EAL: No shared files mode enabled, IPC is disabled 00:06:03.842 EAL: Heap on socket 0 was shrunk by 18MB 00:06:03.842 EAL: Trying to obtain current memory policy. 00:06:03.842 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:03.842 EAL: Restoring previous memory policy: 4 00:06:03.842 EAL: Calling mem event callback 'spdk:(nil)' 00:06:03.842 EAL: request: mp_malloc_sync 00:06:03.842 EAL: No shared files mode enabled, IPC is disabled 00:06:03.842 EAL: Heap on socket 0 was expanded by 34MB 00:06:03.842 EAL: Calling mem event callback 'spdk:(nil)' 00:06:03.842 EAL: request: mp_malloc_sync 00:06:03.842 EAL: No shared files mode enabled, IPC is disabled 00:06:03.842 EAL: Heap on socket 0 was shrunk by 34MB 00:06:03.842 EAL: Trying to obtain current memory policy. 00:06:03.842 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:03.842 EAL: Restoring previous memory policy: 4 00:06:03.842 EAL: Calling mem event callback 'spdk:(nil)' 00:06:03.842 EAL: request: mp_malloc_sync 00:06:03.842 EAL: No shared files mode enabled, IPC is disabled 00:06:03.842 EAL: Heap on socket 0 was expanded by 66MB 00:06:03.842 EAL: Calling mem event callback 'spdk:(nil)' 00:06:03.842 EAL: request: mp_malloc_sync 00:06:03.842 EAL: No shared files mode enabled, IPC is disabled 00:06:03.842 EAL: Heap on socket 0 was shrunk by 66MB 00:06:03.842 EAL: Trying to obtain current memory policy. 00:06:03.842 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:03.842 EAL: Restoring previous memory policy: 4 00:06:03.842 EAL: Calling mem event callback 'spdk:(nil)' 00:06:03.842 EAL: request: mp_malloc_sync 00:06:03.842 EAL: No shared files mode enabled, IPC is disabled 00:06:03.842 EAL: Heap on socket 0 was expanded by 130MB 00:06:03.842 EAL: Calling mem event callback 'spdk:(nil)' 00:06:03.842 EAL: request: mp_malloc_sync 00:06:03.842 EAL: No shared files mode enabled, IPC is disabled 00:06:03.842 EAL: Heap on socket 0 was shrunk by 130MB 00:06:03.842 EAL: Trying to obtain current memory policy. 00:06:03.842 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:03.842 EAL: Restoring previous memory policy: 4 00:06:03.842 EAL: Calling mem event callback 'spdk:(nil)' 00:06:03.842 EAL: request: mp_malloc_sync 00:06:03.842 EAL: No shared files mode enabled, IPC is disabled 00:06:03.842 EAL: Heap on socket 0 was expanded by 258MB 00:06:03.842 EAL: Calling mem event callback 'spdk:(nil)' 00:06:03.842 EAL: request: mp_malloc_sync 00:06:03.842 EAL: No shared files mode enabled, IPC is disabled 00:06:03.842 EAL: Heap on socket 0 was shrunk by 258MB 00:06:03.842 EAL: Trying to obtain current memory policy. 
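Every "Calling mem event callback 'spdk:(nil)'" line is EAL invoking the hook SPDK registered at startup so it can keep its vtophys translations in sync with the heap. A minimal sketch of that mechanism, assuming DPDK's rte_mem_event_callback_register() from rte_memory.h and a hypothetical callback name:

    #include <stdio.h>
    #include <rte_memory.h>

    /* Invoked by EAL on every hugepage map/unmap, i.e. on each
     * "expanded by"/"shrunk by" event in the trace above. */
    static void
    mem_event_cb(enum rte_mem_event event, const void *addr, size_t len, void *arg)
    {
        (void)arg;
        printf("%s: addr=%p len=%zu\n",
               event == RTE_MEM_EVENT_ALLOC ? "alloc" : "free", addr, len);
    }

    int
    register_mem_event_sketch(void)
    {
        /* SPDK registers its callback under the name "spdk"; any name works. */
        return rte_mem_event_callback_register("sketch", mem_event_cb, NULL);
    }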
00:06:03.842 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:03.842 EAL: Restoring previous memory policy: 4 00:06:03.842 EAL: Calling mem event callback 'spdk:(nil)' 00:06:03.842 EAL: request: mp_malloc_sync 00:06:03.842 EAL: No shared files mode enabled, IPC is disabled 00:06:03.842 EAL: Heap on socket 0 was expanded by 514MB 00:06:04.100 EAL: Calling mem event callback 'spdk:(nil)' 00:06:04.100 EAL: request: mp_malloc_sync 00:06:04.100 EAL: No shared files mode enabled, IPC is disabled 00:06:04.100 EAL: Heap on socket 0 was shrunk by 514MB 00:06:04.100 EAL: Trying to obtain current memory policy. 00:06:04.100 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:04.100 EAL: Restoring previous memory policy: 4 00:06:04.100 EAL: Calling mem event callback 'spdk:(nil)' 00:06:04.100 EAL: request: mp_malloc_sync 00:06:04.100 EAL: No shared files mode enabled, IPC is disabled 00:06:04.100 EAL: Heap on socket 0 was expanded by 1026MB 00:06:04.359 EAL: Calling mem event callback 'spdk:(nil)' 00:06:04.359 passed 00:06:04.359 00:06:04.359 Run Summary: Type Total Ran Passed Failed Inactive 00:06:04.359 suites 1 1 n/a 0 0 00:06:04.359 tests 2 2 2 0 0 00:06:04.359 asserts 5218 5218 5218 0 n/a 00:06:04.359 00:06:04.359 Elapsed time = 1.085 seconds 00:06:04.359 EAL: request: mp_malloc_sync 00:06:04.359 EAL: No shared files mode enabled, IPC is disabled 00:06:04.359 EAL: Heap on socket 0 was shrunk by 1026MB 00:06:04.359 EAL: Calling mem event callback 'spdk:(nil)' 00:06:04.359 EAL: request: mp_malloc_sync 00:06:04.359 EAL: No shared files mode enabled, IPC is disabled 00:06:04.359 EAL: Heap on socket 0 was shrunk by 2MB 00:06:04.359 EAL: No shared files mode enabled, IPC is disabled 00:06:04.359 EAL: No shared files mode enabled, IPC is disabled 00:06:04.359 EAL: No shared files mode enabled, IPC is disabled 00:06:04.359 00:06:04.359 real 0m1.326s 00:06:04.359 user 0m0.591s 00:06:04.359 sys 0m0.599s 00:06:04.359 18:12:50 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:04.359 ************************************ 00:06:04.359 END TEST env_vtophys 00:06:04.359 ************************************ 00:06:04.359 18:12:50 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:06:04.359 18:12:50 env -- common/autotest_common.sh@1142 -- # return 0 00:06:04.359 18:12:50 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:04.360 18:12:50 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:04.360 18:12:50 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:04.360 18:12:50 env -- common/autotest_common.sh@10 -- # set +x 00:06:04.618 ************************************ 00:06:04.618 START TEST env_pci 00:06:04.618 ************************************ 00:06:04.618 18:12:50 env.env_pci -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:04.618 00:06:04.618 00:06:04.618 CUnit - A unit testing framework for C - Version 2.1-3 00:06:04.618 http://cunit.sourceforge.net/ 00:06:04.618 00:06:04.618 00:06:04.618 Suite: pci 00:06:04.618 Test: pci_hook ...[2024-07-11 18:12:50.792612] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 73867 has claimed it 00:06:04.618 passed 00:06:04.618 00:06:04.618 Run Summary: Type Total Ran Passed Failed Inactive 00:06:04.618 suites 1 1 n/a 0 0 00:06:04.618 tests 1 1 1 0 0 00:06:04.618 asserts 25 25 25 0 n/a 00:06:04.618 
00:06:04.618 Elapsed time = 0.006 seconds 00:06:04.618 EAL: Cannot find device (10000:00:01.0) 00:06:04.618 EAL: Failed to attach device on primary process 00:06:04.618 00:06:04.618 real 0m0.059s 00:06:04.618 user 0m0.033s 00:06:04.618 sys 0m0.026s 00:06:04.618 18:12:50 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:04.618 ************************************ 00:06:04.618 END TEST env_pci 00:06:04.618 ************************************ 00:06:04.618 18:12:50 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:06:04.618 18:12:50 env -- common/autotest_common.sh@1142 -- # return 0 00:06:04.618 18:12:50 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:06:04.618 18:12:50 env -- env/env.sh@15 -- # uname 00:06:04.618 18:12:50 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:06:04.618 18:12:50 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:06:04.618 18:12:50 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:04.618 18:12:50 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:06:04.618 18:12:50 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:04.618 18:12:50 env -- common/autotest_common.sh@10 -- # set +x 00:06:04.619 ************************************ 00:06:04.619 START TEST env_dpdk_post_init 00:06:04.619 ************************************ 00:06:04.619 18:12:50 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:04.619 EAL: Detected CPU lcores: 10 00:06:04.619 EAL: Detected NUMA nodes: 1 00:06:04.619 EAL: Detected shared linkage of DPDK 00:06:04.619 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:04.619 EAL: Selected IOVA mode 'PA' 00:06:04.884 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:04.884 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:06:04.884 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:06:04.884 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:06:04.884 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:06:04.884 Starting DPDK initialization... 00:06:04.884 Starting SPDK post initialization... 00:06:04.884 SPDK NVMe probe 00:06:04.884 Attaching to 0000:00:10.0 00:06:04.884 Attaching to 0000:00:11.0 00:06:04.884 Attaching to 0000:00:12.0 00:06:04.884 Attaching to 0000:00:13.0 00:06:04.884 Attached to 0000:00:11.0 00:06:04.884 Attached to 0000:00:13.0 00:06:04.884 Attached to 0000:00:10.0 00:06:04.884 Attached to 0000:00:12.0 00:06:04.884 Cleaning up... 
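The Probe/Attaching/Attached sequence above is the standard SPDK NVMe enumeration that env_dpdk_post_init drives. A bare-bones sketch of that flow, assuming the spdk_nvme_probe() API from include/spdk/nvme.h (the callback bodies are illustrative):

    #include <stdbool.h>
    #include <stdio.h>
    #include "spdk/nvme.h"

    static bool
    probe_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
             struct spdk_nvme_ctrlr_opts *opts)
    {
        printf("Attaching to %s\n", trid->traddr);
        return true; /* attach to every controller found, as the test does */
    }

    static void
    attach_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
              struct spdk_nvme_ctrlr *ctrlr,
              const struct spdk_nvme_ctrlr_opts *opts)
    {
        printf("Attached to %s\n", trid->traddr);
    }

    int
    enumerate_sketch(void)
    {
        /* Probes every local PCIe NVMe device (here the four 1b36:0010
         * controllers) and fires attach_cb as each one comes up. */
        return spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL);
    }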
00:06:04.884 00:06:04.884 real 0m0.207s 00:06:04.884 user 0m0.052s 00:06:04.884 sys 0m0.057s 00:06:04.884 18:12:51 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:04.884 18:12:51 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:06:04.884 ************************************ 00:06:04.884 END TEST env_dpdk_post_init 00:06:04.884 ************************************ 00:06:04.884 18:12:51 env -- common/autotest_common.sh@1142 -- # return 0 00:06:04.884 18:12:51 env -- env/env.sh@26 -- # uname 00:06:04.884 18:12:51 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:04.884 18:12:51 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:04.884 18:12:51 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:04.884 18:12:51 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:04.884 18:12:51 env -- common/autotest_common.sh@10 -- # set +x 00:06:04.884 ************************************ 00:06:04.884 START TEST env_mem_callbacks 00:06:04.884 ************************************ 00:06:04.884 18:12:51 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:04.884 EAL: Detected CPU lcores: 10 00:06:04.884 EAL: Detected NUMA nodes: 1 00:06:04.884 EAL: Detected shared linkage of DPDK 00:06:04.884 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:04.884 EAL: Selected IOVA mode 'PA' 00:06:04.884 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:04.884 00:06:04.884 00:06:04.884 CUnit - A unit testing framework for C - Version 2.1-3 00:06:04.884 http://cunit.sourceforge.net/ 00:06:04.884 00:06:04.884 00:06:04.884 Suite: memory 00:06:04.884 Test: test ... 
00:06:04.884 register 0x200000200000 2097152 00:06:04.884 malloc 3145728 00:06:04.884 register 0x200000400000 4194304 00:06:04.884 buf 0x200000500000 len 3145728 PASSED 00:06:04.884 malloc 64 00:06:04.884 buf 0x2000004fff40 len 64 PASSED 00:06:04.884 malloc 4194304 00:06:05.153 register 0x200000800000 6291456 00:06:05.153 buf 0x200000a00000 len 4194304 PASSED 00:06:05.153 free 0x200000500000 3145728 00:06:05.153 free 0x2000004fff40 64 00:06:05.153 unregister 0x200000400000 4194304 PASSED 00:06:05.153 free 0x200000a00000 4194304 00:06:05.153 unregister 0x200000800000 6291456 PASSED 00:06:05.153 malloc 8388608 00:06:05.153 register 0x200000400000 10485760 00:06:05.153 buf 0x200000600000 len 8388608 PASSED 00:06:05.153 free 0x200000600000 8388608 00:06:05.153 unregister 0x200000400000 10485760 PASSED 00:06:05.153 passed 00:06:05.153 00:06:05.153 Run Summary: Type Total Ran Passed Failed Inactive 00:06:05.153 suites 1 1 n/a 0 0 00:06:05.153 tests 1 1 1 0 0 00:06:05.153 asserts 15 15 15 0 n/a 00:06:05.153 00:06:05.153 Elapsed time = 0.009 seconds 00:06:05.153 00:06:05.153 real 0m0.167s 00:06:05.153 user 0m0.023s 00:06:05.153 sys 0m0.043s 00:06:05.153 18:12:51 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:05.153 18:12:51 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:06:05.153 ************************************ 00:06:05.153 END TEST env_mem_callbacks 00:06:05.153 ************************************ 00:06:05.153 18:12:51 env -- common/autotest_common.sh@1142 -- # return 0 00:06:05.153 00:06:05.153 real 0m2.487s 00:06:05.153 user 0m1.166s 00:06:05.153 sys 0m0.959s 00:06:05.153 18:12:51 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:05.153 18:12:51 env -- common/autotest_common.sh@10 -- # set +x 00:06:05.153 ************************************ 00:06:05.153 END TEST env 00:06:05.153 ************************************ 00:06:05.153 18:12:51 -- common/autotest_common.sh@1142 -- # return 0 00:06:05.153 18:12:51 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:05.153 18:12:51 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:05.153 18:12:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:05.153 18:12:51 -- common/autotest_common.sh@10 -- # set +x 00:06:05.153 ************************************ 00:06:05.153 START TEST rpc 00:06:05.153 ************************************ 00:06:05.153 18:12:51 rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:05.153 * Looking for test storage... 00:06:05.153 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:05.153 18:12:51 rpc -- rpc/rpc.sh@65 -- # spdk_pid=73980 00:06:05.153 18:12:51 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:06:05.153 18:12:51 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:05.153 18:12:51 rpc -- rpc/rpc.sh@67 -- # waitforlisten 73980 00:06:05.153 18:12:51 rpc -- common/autotest_common.sh@829 -- # '[' -z 73980 ']' 00:06:05.153 18:12:51 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:05.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:05.153 18:12:51 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:05.153 18:12:51 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
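The register/unregister lines in the mem_callbacks trace above are memory-map notifications fired as the test mallocs from and frees back to the EAL heap. Applications can trigger the same notifications for externally allocated memory with spdk_mem_register()/spdk_mem_unregister(); a rough sketch, assuming the declarations in include/spdk/env.h and a hypothetical aligned buffer:

    #include <stdlib.h>
    #include "spdk/env.h"

    static void
    mem_register_sketch(void)
    {
        size_t len = 4 * 1024 * 1024; /* matches "register 0x... 4194304" */
        void *buf = NULL;

        /* hypothetical 2 MiB-aligned buffer; the test itself allocates
         * from the DPDK heap instead */
        if (posix_memalign(&buf, 2 * 1024 * 1024, len) != 0) {
            return;
        }

        spdk_mem_register(buf, len);   /* now visible to SPDK translations */
        /* ... use buf for DMA-capable I/O ... */
        spdk_mem_unregister(buf, len); /* must precede the free */
        free(buf);
    }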
00:06:05.153 18:12:51 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:05.153 18:12:51 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.413 [2024-07-11 18:12:51.603026] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:06:05.413 [2024-07-11 18:12:51.603270] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73980 ] 00:06:05.413 [2024-07-11 18:12:51.753030] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.413 [2024-07-11 18:12:51.786671] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:05.413 [2024-07-11 18:12:51.786738] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 73980' to capture a snapshot of events at runtime. 00:06:05.413 [2024-07-11 18:12:51.786769] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:05.413 [2024-07-11 18:12:51.786795] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:05.413 [2024-07-11 18:12:51.786811] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid73980 for offline analysis/debug. 00:06:05.413 [2024-07-11 18:12:51.786852] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.351 18:12:52 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:06.352 18:12:52 rpc -- common/autotest_common.sh@862 -- # return 0 00:06:06.352 18:12:52 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:06.352 18:12:52 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:06.352 18:12:52 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:06.352 18:12:52 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:06.352 18:12:52 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:06.352 18:12:52 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:06.352 18:12:52 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.352 ************************************ 00:06:06.352 START TEST rpc_integrity 00:06:06.352 ************************************ 00:06:06.352 18:12:52 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:06:06.352 18:12:52 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:06.352 18:12:52 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:06.352 18:12:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:06.352 18:12:52 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:06.352 18:12:52 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:06.352 18:12:52 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:06.352 18:12:52 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:06.352 18:12:52 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:06.352 18:12:52 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:06.352 18:12:52 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:06.352 18:12:52 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:06.352 18:12:52 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:06.352 18:12:52 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:06.352 18:12:52 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:06.352 18:12:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:06.352 18:12:52 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:06.352 18:12:52 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:06.352 { 00:06:06.352 "name": "Malloc0", 00:06:06.352 "aliases": [ 00:06:06.352 "fa3cad95-0903-46be-9629-3eaf9c1bcaec" 00:06:06.352 ], 00:06:06.352 "product_name": "Malloc disk", 00:06:06.352 "block_size": 512, 00:06:06.352 "num_blocks": 16384, 00:06:06.352 "uuid": "fa3cad95-0903-46be-9629-3eaf9c1bcaec", 00:06:06.352 "assigned_rate_limits": { 00:06:06.352 "rw_ios_per_sec": 0, 00:06:06.352 "rw_mbytes_per_sec": 0, 00:06:06.352 "r_mbytes_per_sec": 0, 00:06:06.352 "w_mbytes_per_sec": 0 00:06:06.352 }, 00:06:06.352 "claimed": false, 00:06:06.352 "zoned": false, 00:06:06.352 "supported_io_types": { 00:06:06.352 "read": true, 00:06:06.352 "write": true, 00:06:06.352 "unmap": true, 00:06:06.352 "flush": true, 00:06:06.352 "reset": true, 00:06:06.352 "nvme_admin": false, 00:06:06.352 "nvme_io": false, 00:06:06.352 "nvme_io_md": false, 00:06:06.352 "write_zeroes": true, 00:06:06.352 "zcopy": true, 00:06:06.352 "get_zone_info": false, 00:06:06.352 "zone_management": false, 00:06:06.352 "zone_append": false, 00:06:06.352 "compare": false, 00:06:06.352 "compare_and_write": false, 00:06:06.352 "abort": true, 00:06:06.352 "seek_hole": false, 00:06:06.352 "seek_data": false, 00:06:06.352 "copy": true, 00:06:06.352 "nvme_iov_md": false 00:06:06.352 }, 00:06:06.352 "memory_domains": [ 00:06:06.352 { 00:06:06.352 "dma_device_id": "system", 00:06:06.352 "dma_device_type": 1 00:06:06.352 }, 00:06:06.352 { 00:06:06.352 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:06.352 "dma_device_type": 2 00:06:06.352 } 00:06:06.352 ], 00:06:06.352 "driver_specific": {} 00:06:06.352 } 00:06:06.352 ]' 00:06:06.352 18:12:52 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:06.352 18:12:52 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:06.352 18:12:52 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:06.352 18:12:52 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:06.352 18:12:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:06.352 [2024-07-11 18:12:52.708226] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:06.352 [2024-07-11 18:12:52.708324] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:06.352 [2024-07-11 18:12:52.708363] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:06:06.352 [2024-07-11 18:12:52.708383] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:06.352 [2024-07-11 18:12:52.711049] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:06.352 [2024-07-11 18:12:52.711134] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:06.352 Passthru0 00:06:06.352 18:12:52 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:06.352 
18:12:52 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:06.352 18:12:52 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:06.352 18:12:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:06.352 18:12:52 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:06.352 18:12:52 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:06.352 { 00:06:06.352 "name": "Malloc0", 00:06:06.352 "aliases": [ 00:06:06.352 "fa3cad95-0903-46be-9629-3eaf9c1bcaec" 00:06:06.352 ], 00:06:06.352 "product_name": "Malloc disk", 00:06:06.352 "block_size": 512, 00:06:06.352 "num_blocks": 16384, 00:06:06.352 "uuid": "fa3cad95-0903-46be-9629-3eaf9c1bcaec", 00:06:06.352 "assigned_rate_limits": { 00:06:06.352 "rw_ios_per_sec": 0, 00:06:06.352 "rw_mbytes_per_sec": 0, 00:06:06.352 "r_mbytes_per_sec": 0, 00:06:06.352 "w_mbytes_per_sec": 0 00:06:06.352 }, 00:06:06.352 "claimed": true, 00:06:06.352 "claim_type": "exclusive_write", 00:06:06.352 "zoned": false, 00:06:06.352 "supported_io_types": { 00:06:06.352 "read": true, 00:06:06.352 "write": true, 00:06:06.352 "unmap": true, 00:06:06.352 "flush": true, 00:06:06.352 "reset": true, 00:06:06.352 "nvme_admin": false, 00:06:06.352 "nvme_io": false, 00:06:06.352 "nvme_io_md": false, 00:06:06.352 "write_zeroes": true, 00:06:06.352 "zcopy": true, 00:06:06.352 "get_zone_info": false, 00:06:06.352 "zone_management": false, 00:06:06.352 "zone_append": false, 00:06:06.352 "compare": false, 00:06:06.352 "compare_and_write": false, 00:06:06.352 "abort": true, 00:06:06.352 "seek_hole": false, 00:06:06.352 "seek_data": false, 00:06:06.352 "copy": true, 00:06:06.352 "nvme_iov_md": false 00:06:06.352 }, 00:06:06.352 "memory_domains": [ 00:06:06.352 { 00:06:06.352 "dma_device_id": "system", 00:06:06.352 "dma_device_type": 1 00:06:06.352 }, 00:06:06.352 { 00:06:06.352 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:06.352 "dma_device_type": 2 00:06:06.352 } 00:06:06.352 ], 00:06:06.352 "driver_specific": {} 00:06:06.352 }, 00:06:06.352 { 00:06:06.352 "name": "Passthru0", 00:06:06.352 "aliases": [ 00:06:06.352 "dc4f557c-d3ff-5f2c-8a2b-d226b96ba5cf" 00:06:06.352 ], 00:06:06.352 "product_name": "passthru", 00:06:06.352 "block_size": 512, 00:06:06.352 "num_blocks": 16384, 00:06:06.352 "uuid": "dc4f557c-d3ff-5f2c-8a2b-d226b96ba5cf", 00:06:06.352 "assigned_rate_limits": { 00:06:06.352 "rw_ios_per_sec": 0, 00:06:06.352 "rw_mbytes_per_sec": 0, 00:06:06.352 "r_mbytes_per_sec": 0, 00:06:06.352 "w_mbytes_per_sec": 0 00:06:06.352 }, 00:06:06.352 "claimed": false, 00:06:06.352 "zoned": false, 00:06:06.352 "supported_io_types": { 00:06:06.352 "read": true, 00:06:06.352 "write": true, 00:06:06.352 "unmap": true, 00:06:06.352 "flush": true, 00:06:06.352 "reset": true, 00:06:06.352 "nvme_admin": false, 00:06:06.352 "nvme_io": false, 00:06:06.352 "nvme_io_md": false, 00:06:06.352 "write_zeroes": true, 00:06:06.352 "zcopy": true, 00:06:06.352 "get_zone_info": false, 00:06:06.352 "zone_management": false, 00:06:06.352 "zone_append": false, 00:06:06.352 "compare": false, 00:06:06.352 "compare_and_write": false, 00:06:06.352 "abort": true, 00:06:06.352 "seek_hole": false, 00:06:06.352 "seek_data": false, 00:06:06.352 "copy": true, 00:06:06.352 "nvme_iov_md": false 00:06:06.352 }, 00:06:06.352 "memory_domains": [ 00:06:06.352 { 00:06:06.352 "dma_device_id": "system", 00:06:06.352 "dma_device_type": 1 00:06:06.352 }, 00:06:06.352 { 00:06:06.352 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:06.352 "dma_device_type": 2 
00:06:06.352 } 00:06:06.352 ], 00:06:06.352 "driver_specific": { 00:06:06.352 "passthru": { 00:06:06.352 "name": "Passthru0", 00:06:06.352 "base_bdev_name": "Malloc0" 00:06:06.352 } 00:06:06.352 } 00:06:06.352 } 00:06:06.352 ]' 00:06:06.352 18:12:52 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:06.612 18:12:52 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:06.612 18:12:52 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:06.612 18:12:52 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:06.612 18:12:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:06.612 18:12:52 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:06.612 18:12:52 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:06.612 18:12:52 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:06.612 18:12:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:06.612 18:12:52 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:06.612 18:12:52 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:06.612 18:12:52 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:06.612 18:12:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:06.612 18:12:52 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:06.612 18:12:52 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:06.612 18:12:52 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:06.612 18:12:52 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:06.612 00:06:06.612 real 0m0.329s 00:06:06.612 user 0m0.209s 00:06:06.612 sys 0m0.046s 00:06:06.612 18:12:52 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:06.612 ************************************ 00:06:06.612 END TEST rpc_integrity 00:06:06.612 18:12:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:06.612 ************************************ 00:06:06.612 18:12:52 rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:06.612 18:12:52 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:06.612 18:12:52 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:06.612 18:12:52 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:06.612 18:12:52 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.612 ************************************ 00:06:06.612 START TEST rpc_plugins 00:06:06.612 ************************************ 00:06:06.612 18:12:52 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:06:06.612 18:12:52 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:06.612 18:12:52 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:06.612 18:12:52 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:06.612 18:12:52 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:06.612 18:12:52 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:06.612 18:12:52 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:06.612 18:12:52 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:06.612 18:12:52 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:06.612 18:12:52 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:06.612 18:12:52 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # 
bdevs='[ 00:06:06.612 { 00:06:06.612 "name": "Malloc1", 00:06:06.612 "aliases": [ 00:06:06.612 "87641d2d-99c7-4c78-8f2c-b60c179b97b7" 00:06:06.612 ], 00:06:06.612 "product_name": "Malloc disk", 00:06:06.612 "block_size": 4096, 00:06:06.612 "num_blocks": 256, 00:06:06.612 "uuid": "87641d2d-99c7-4c78-8f2c-b60c179b97b7", 00:06:06.612 "assigned_rate_limits": { 00:06:06.612 "rw_ios_per_sec": 0, 00:06:06.612 "rw_mbytes_per_sec": 0, 00:06:06.612 "r_mbytes_per_sec": 0, 00:06:06.612 "w_mbytes_per_sec": 0 00:06:06.612 }, 00:06:06.612 "claimed": false, 00:06:06.612 "zoned": false, 00:06:06.612 "supported_io_types": { 00:06:06.612 "read": true, 00:06:06.612 "write": true, 00:06:06.612 "unmap": true, 00:06:06.612 "flush": true, 00:06:06.612 "reset": true, 00:06:06.612 "nvme_admin": false, 00:06:06.612 "nvme_io": false, 00:06:06.612 "nvme_io_md": false, 00:06:06.612 "write_zeroes": true, 00:06:06.612 "zcopy": true, 00:06:06.612 "get_zone_info": false, 00:06:06.612 "zone_management": false, 00:06:06.612 "zone_append": false, 00:06:06.612 "compare": false, 00:06:06.612 "compare_and_write": false, 00:06:06.612 "abort": true, 00:06:06.612 "seek_hole": false, 00:06:06.612 "seek_data": false, 00:06:06.612 "copy": true, 00:06:06.612 "nvme_iov_md": false 00:06:06.612 }, 00:06:06.612 "memory_domains": [ 00:06:06.612 { 00:06:06.612 "dma_device_id": "system", 00:06:06.612 "dma_device_type": 1 00:06:06.612 }, 00:06:06.612 { 00:06:06.612 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:06.612 "dma_device_type": 2 00:06:06.612 } 00:06:06.612 ], 00:06:06.612 "driver_specific": {} 00:06:06.612 } 00:06:06.612 ]' 00:06:06.612 18:12:52 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:06.871 18:12:53 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:06.871 18:12:53 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:06.871 18:12:53 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:06.871 18:12:53 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:06.871 18:12:53 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:06.871 18:12:53 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:06.871 18:12:53 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:06.871 18:12:53 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:06.871 18:12:53 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:06.871 18:12:53 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:06.871 18:12:53 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:06.871 18:12:53 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:06.871 00:06:06.871 real 0m0.169s 00:06:06.871 user 0m0.106s 00:06:06.871 sys 0m0.024s 00:06:06.871 18:12:53 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:06.871 18:12:53 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:06.871 ************************************ 00:06:06.871 END TEST rpc_plugins 00:06:06.871 ************************************ 00:06:06.871 18:12:53 rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:06.871 18:12:53 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:06.871 18:12:53 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:06.871 18:12:53 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:06.871 18:12:53 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.871 ************************************ 00:06:06.871 
START TEST rpc_trace_cmd_test 00:06:06.871 ************************************ 00:06:06.871 18:12:53 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:06:06.871 18:12:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:06.871 18:12:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:06.871 18:12:53 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:06.871 18:12:53 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:06.871 18:12:53 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:06.871 18:12:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:06.871 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid73980", 00:06:06.871 "tpoint_group_mask": "0x8", 00:06:06.871 "iscsi_conn": { 00:06:06.871 "mask": "0x2", 00:06:06.871 "tpoint_mask": "0x0" 00:06:06.871 }, 00:06:06.871 "scsi": { 00:06:06.871 "mask": "0x4", 00:06:06.871 "tpoint_mask": "0x0" 00:06:06.871 }, 00:06:06.871 "bdev": { 00:06:06.871 "mask": "0x8", 00:06:06.871 "tpoint_mask": "0xffffffffffffffff" 00:06:06.871 }, 00:06:06.871 "nvmf_rdma": { 00:06:06.871 "mask": "0x10", 00:06:06.871 "tpoint_mask": "0x0" 00:06:06.871 }, 00:06:06.871 "nvmf_tcp": { 00:06:06.871 "mask": "0x20", 00:06:06.871 "tpoint_mask": "0x0" 00:06:06.871 }, 00:06:06.871 "ftl": { 00:06:06.871 "mask": "0x40", 00:06:06.871 "tpoint_mask": "0x0" 00:06:06.871 }, 00:06:06.871 "blobfs": { 00:06:06.871 "mask": "0x80", 00:06:06.871 "tpoint_mask": "0x0" 00:06:06.871 }, 00:06:06.871 "dsa": { 00:06:06.871 "mask": "0x200", 00:06:06.871 "tpoint_mask": "0x0" 00:06:06.871 }, 00:06:06.871 "thread": { 00:06:06.871 "mask": "0x400", 00:06:06.871 "tpoint_mask": "0x0" 00:06:06.871 }, 00:06:06.871 "nvme_pcie": { 00:06:06.871 "mask": "0x800", 00:06:06.871 "tpoint_mask": "0x0" 00:06:06.871 }, 00:06:06.871 "iaa": { 00:06:06.871 "mask": "0x1000", 00:06:06.871 "tpoint_mask": "0x0" 00:06:06.871 }, 00:06:06.871 "nvme_tcp": { 00:06:06.871 "mask": "0x2000", 00:06:06.871 "tpoint_mask": "0x0" 00:06:06.871 }, 00:06:06.871 "bdev_nvme": { 00:06:06.871 "mask": "0x4000", 00:06:06.871 "tpoint_mask": "0x0" 00:06:06.871 }, 00:06:06.871 "sock": { 00:06:06.871 "mask": "0x8000", 00:06:06.871 "tpoint_mask": "0x0" 00:06:06.871 } 00:06:06.871 }' 00:06:06.871 18:12:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:06.871 18:12:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:06:06.871 18:12:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:06.871 18:12:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:06.871 18:12:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:07.129 18:12:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:07.129 18:12:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:07.129 18:12:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:07.129 18:12:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:07.129 18:12:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:06:07.129 00:06:07.129 real 0m0.267s 00:06:07.129 user 0m0.236s 00:06:07.129 sys 0m0.024s 00:06:07.129 18:12:53 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:07.129 18:12:53 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:07.129 ************************************ 00:06:07.129 END 
TEST rpc_trace_cmd_test 00:06:07.129 ************************************ 00:06:07.129 18:12:53 rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:07.129 18:12:53 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:07.129 18:12:53 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:07.129 18:12:53 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:07.129 18:12:53 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:07.129 18:12:53 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:07.129 18:12:53 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:07.129 ************************************ 00:06:07.129 START TEST rpc_daemon_integrity 00:06:07.129 ************************************ 00:06:07.129 18:12:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:06:07.129 18:12:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:07.129 18:12:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:07.129 18:12:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:07.129 18:12:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:07.129 18:12:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:07.129 18:12:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:07.388 18:12:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:07.388 18:12:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:07.388 18:12:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:07.388 18:12:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:07.388 18:12:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:07.388 18:12:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:07.388 18:12:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:07.388 18:12:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:07.388 18:12:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:07.388 18:12:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:07.388 18:12:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:07.388 { 00:06:07.388 "name": "Malloc2", 00:06:07.388 "aliases": [ 00:06:07.388 "93cf1b00-8c7d-43c5-8e3b-280139f6a95e" 00:06:07.388 ], 00:06:07.388 "product_name": "Malloc disk", 00:06:07.388 "block_size": 512, 00:06:07.388 "num_blocks": 16384, 00:06:07.388 "uuid": "93cf1b00-8c7d-43c5-8e3b-280139f6a95e", 00:06:07.388 "assigned_rate_limits": { 00:06:07.388 "rw_ios_per_sec": 0, 00:06:07.388 "rw_mbytes_per_sec": 0, 00:06:07.388 "r_mbytes_per_sec": 0, 00:06:07.388 "w_mbytes_per_sec": 0 00:06:07.388 }, 00:06:07.388 "claimed": false, 00:06:07.388 "zoned": false, 00:06:07.388 "supported_io_types": { 00:06:07.388 "read": true, 00:06:07.388 "write": true, 00:06:07.388 "unmap": true, 00:06:07.388 "flush": true, 00:06:07.388 "reset": true, 00:06:07.388 "nvme_admin": false, 00:06:07.388 "nvme_io": false, 00:06:07.388 "nvme_io_md": false, 00:06:07.388 "write_zeroes": true, 00:06:07.389 "zcopy": true, 00:06:07.389 "get_zone_info": false, 00:06:07.389 "zone_management": false, 00:06:07.389 "zone_append": false, 00:06:07.389 "compare": false, 00:06:07.389 "compare_and_write": false, 00:06:07.389 "abort": true, 00:06:07.389 "seek_hole": false, 
00:06:07.389 "seek_data": false, 00:06:07.389 "copy": true, 00:06:07.389 "nvme_iov_md": false 00:06:07.389 }, 00:06:07.389 "memory_domains": [ 00:06:07.389 { 00:06:07.389 "dma_device_id": "system", 00:06:07.389 "dma_device_type": 1 00:06:07.389 }, 00:06:07.389 { 00:06:07.389 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:07.389 "dma_device_type": 2 00:06:07.389 } 00:06:07.389 ], 00:06:07.389 "driver_specific": {} 00:06:07.389 } 00:06:07.389 ]' 00:06:07.389 18:12:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:07.389 18:12:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:07.389 18:12:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:07.389 18:12:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:07.389 18:12:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:07.389 [2024-07-11 18:12:53.636931] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:07.389 [2024-07-11 18:12:53.637027] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:07.389 [2024-07-11 18:12:53.637055] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:06:07.389 [2024-07-11 18:12:53.637071] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:07.389 [2024-07-11 18:12:53.639717] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:07.389 [2024-07-11 18:12:53.639792] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:07.389 Passthru0 00:06:07.389 18:12:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:07.389 18:12:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:07.389 18:12:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:07.389 18:12:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:07.389 18:12:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:07.389 18:12:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:07.389 { 00:06:07.389 "name": "Malloc2", 00:06:07.389 "aliases": [ 00:06:07.389 "93cf1b00-8c7d-43c5-8e3b-280139f6a95e" 00:06:07.389 ], 00:06:07.389 "product_name": "Malloc disk", 00:06:07.389 "block_size": 512, 00:06:07.389 "num_blocks": 16384, 00:06:07.389 "uuid": "93cf1b00-8c7d-43c5-8e3b-280139f6a95e", 00:06:07.389 "assigned_rate_limits": { 00:06:07.389 "rw_ios_per_sec": 0, 00:06:07.389 "rw_mbytes_per_sec": 0, 00:06:07.389 "r_mbytes_per_sec": 0, 00:06:07.389 "w_mbytes_per_sec": 0 00:06:07.389 }, 00:06:07.389 "claimed": true, 00:06:07.389 "claim_type": "exclusive_write", 00:06:07.389 "zoned": false, 00:06:07.389 "supported_io_types": { 00:06:07.389 "read": true, 00:06:07.389 "write": true, 00:06:07.389 "unmap": true, 00:06:07.389 "flush": true, 00:06:07.389 "reset": true, 00:06:07.389 "nvme_admin": false, 00:06:07.389 "nvme_io": false, 00:06:07.389 "nvme_io_md": false, 00:06:07.389 "write_zeroes": true, 00:06:07.389 "zcopy": true, 00:06:07.389 "get_zone_info": false, 00:06:07.389 "zone_management": false, 00:06:07.389 "zone_append": false, 00:06:07.389 "compare": false, 00:06:07.389 "compare_and_write": false, 00:06:07.389 "abort": true, 00:06:07.389 "seek_hole": false, 00:06:07.389 "seek_data": false, 00:06:07.389 "copy": true, 00:06:07.389 "nvme_iov_md": false 00:06:07.389 }, 00:06:07.389 
"memory_domains": [ 00:06:07.389 { 00:06:07.389 "dma_device_id": "system", 00:06:07.389 "dma_device_type": 1 00:06:07.389 }, 00:06:07.389 { 00:06:07.389 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:07.389 "dma_device_type": 2 00:06:07.389 } 00:06:07.389 ], 00:06:07.389 "driver_specific": {} 00:06:07.389 }, 00:06:07.389 { 00:06:07.389 "name": "Passthru0", 00:06:07.389 "aliases": [ 00:06:07.389 "be3b2d9f-64b7-54a8-be46-465f6f878c89" 00:06:07.389 ], 00:06:07.389 "product_name": "passthru", 00:06:07.389 "block_size": 512, 00:06:07.389 "num_blocks": 16384, 00:06:07.389 "uuid": "be3b2d9f-64b7-54a8-be46-465f6f878c89", 00:06:07.389 "assigned_rate_limits": { 00:06:07.389 "rw_ios_per_sec": 0, 00:06:07.389 "rw_mbytes_per_sec": 0, 00:06:07.389 "r_mbytes_per_sec": 0, 00:06:07.389 "w_mbytes_per_sec": 0 00:06:07.389 }, 00:06:07.389 "claimed": false, 00:06:07.389 "zoned": false, 00:06:07.389 "supported_io_types": { 00:06:07.389 "read": true, 00:06:07.389 "write": true, 00:06:07.389 "unmap": true, 00:06:07.389 "flush": true, 00:06:07.389 "reset": true, 00:06:07.389 "nvme_admin": false, 00:06:07.389 "nvme_io": false, 00:06:07.389 "nvme_io_md": false, 00:06:07.389 "write_zeroes": true, 00:06:07.389 "zcopy": true, 00:06:07.389 "get_zone_info": false, 00:06:07.389 "zone_management": false, 00:06:07.389 "zone_append": false, 00:06:07.389 "compare": false, 00:06:07.389 "compare_and_write": false, 00:06:07.389 "abort": true, 00:06:07.389 "seek_hole": false, 00:06:07.389 "seek_data": false, 00:06:07.389 "copy": true, 00:06:07.389 "nvme_iov_md": false 00:06:07.389 }, 00:06:07.389 "memory_domains": [ 00:06:07.389 { 00:06:07.389 "dma_device_id": "system", 00:06:07.389 "dma_device_type": 1 00:06:07.389 }, 00:06:07.389 { 00:06:07.389 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:07.389 "dma_device_type": 2 00:06:07.389 } 00:06:07.389 ], 00:06:07.389 "driver_specific": { 00:06:07.389 "passthru": { 00:06:07.389 "name": "Passthru0", 00:06:07.389 "base_bdev_name": "Malloc2" 00:06:07.389 } 00:06:07.389 } 00:06:07.389 } 00:06:07.389 ]' 00:06:07.389 18:12:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:07.389 18:12:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:07.389 18:12:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:07.389 18:12:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:07.389 18:12:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:07.389 18:12:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:07.389 18:12:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:07.389 18:12:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:07.389 18:12:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:07.389 18:12:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:07.389 18:12:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:07.389 18:12:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:07.389 18:12:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:07.389 18:12:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:07.389 18:12:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:07.389 18:12:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:07.649 
18:12:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:07.649 00:06:07.649 real 0m0.331s 00:06:07.649 user 0m0.221s 00:06:07.649 sys 0m0.040s 00:06:07.649 18:12:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:07.649 18:12:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:07.649 ************************************ 00:06:07.649 END TEST rpc_daemon_integrity 00:06:07.649 ************************************ 00:06:07.649 18:12:53 rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:07.649 18:12:53 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:07.649 18:12:53 rpc -- rpc/rpc.sh@84 -- # killprocess 73980 00:06:07.649 18:12:53 rpc -- common/autotest_common.sh@948 -- # '[' -z 73980 ']' 00:06:07.649 18:12:53 rpc -- common/autotest_common.sh@952 -- # kill -0 73980 00:06:07.649 18:12:53 rpc -- common/autotest_common.sh@953 -- # uname 00:06:07.649 18:12:53 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:07.649 18:12:53 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73980 00:06:07.649 18:12:53 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:07.649 18:12:53 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:07.649 killing process with pid 73980 00:06:07.649 18:12:53 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73980' 00:06:07.649 18:12:53 rpc -- common/autotest_common.sh@967 -- # kill 73980 00:06:07.649 18:12:53 rpc -- common/autotest_common.sh@972 -- # wait 73980 00:06:07.908 00:06:07.908 real 0m2.736s 00:06:07.908 user 0m3.647s 00:06:07.908 sys 0m0.629s 00:06:07.908 18:12:54 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:07.908 18:12:54 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:07.908 ************************************ 00:06:07.908 END TEST rpc 00:06:07.908 ************************************ 00:06:07.908 18:12:54 -- common/autotest_common.sh@1142 -- # return 0 00:06:07.908 18:12:54 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:07.908 18:12:54 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:07.908 18:12:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:07.908 18:12:54 -- common/autotest_common.sh@10 -- # set +x 00:06:07.909 ************************************ 00:06:07.909 START TEST skip_rpc 00:06:07.909 ************************************ 00:06:07.909 18:12:54 skip_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:07.909 * Looking for test storage... 
00:06:07.909 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:07.909 18:12:54 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:07.909 18:12:54 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:07.909 18:12:54 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:07.909 18:12:54 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:07.909 18:12:54 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:07.909 18:12:54 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:07.909 ************************************ 00:06:07.909 START TEST skip_rpc 00:06:07.909 ************************************ 00:06:07.909 18:12:54 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:06:07.909 18:12:54 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=74179 00:06:07.909 18:12:54 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:07.909 18:12:54 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:07.909 18:12:54 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:06:08.169 [2024-07-11 18:12:54.391784] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:06:08.169 [2024-07-11 18:12:54.391977] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74179 ] 00:06:08.169 [2024-07-11 18:12:54.539523] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.169 [2024-07-11 18:12:54.571635] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.439 18:12:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:13.439 18:12:59 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:13.439 18:12:59 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:13.439 18:12:59 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:13.439 18:12:59 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:13.439 18:12:59 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:13.439 18:12:59 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:13.439 18:12:59 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:06:13.439 18:12:59 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:13.439 18:12:59 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:13.439 18:12:59 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:13.439 18:12:59 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:13.439 18:12:59 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:13.439 18:12:59 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:13.439 18:12:59 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:13.439 18:12:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:13.439 18:12:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 74179 
00:06:13.439 18:12:59 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 74179 ']' 00:06:13.439 18:12:59 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 74179 00:06:13.439 18:12:59 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:06:13.439 18:12:59 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:13.439 18:12:59 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74179 00:06:13.439 18:12:59 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:13.439 18:12:59 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:13.439 killing process with pid 74179 00:06:13.439 18:12:59 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74179' 00:06:13.439 18:12:59 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 74179 00:06:13.439 18:12:59 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 74179 00:06:13.439 00:06:13.439 real 0m5.314s 00:06:13.439 user 0m4.967s 00:06:13.439 sys 0m0.242s 00:06:13.439 18:12:59 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:13.439 18:12:59 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:13.439 ************************************ 00:06:13.439 END TEST skip_rpc 00:06:13.439 ************************************ 00:06:13.439 18:12:59 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:13.439 18:12:59 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:13.439 18:12:59 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:13.439 18:12:59 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:13.439 18:12:59 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:13.439 ************************************ 00:06:13.439 START TEST skip_rpc_with_json 00:06:13.439 ************************************ 00:06:13.439 18:12:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:06:13.439 18:12:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:13.439 18:12:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=74261 00:06:13.439 18:12:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:13.439 18:12:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:13.439 18:12:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 74261 00:06:13.439 18:12:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 74261 ']' 00:06:13.439 18:12:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.439 18:12:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:13.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:13.439 18:12:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:13.439 18:12:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:13.439 18:12:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:13.439 [2024-07-11 18:12:59.752869] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:06:13.439 [2024-07-11 18:12:59.753858] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74261 ] 00:06:13.698 [2024-07-11 18:12:59.903778] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.698 [2024-07-11 18:12:59.936042] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.266 18:13:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:14.266 18:13:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:06:14.266 18:13:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:14.266 18:13:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:14.266 18:13:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:14.266 [2024-07-11 18:13:00.611495] nvmf_rpc.c:2562:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:14.266 request: 00:06:14.266 { 00:06:14.266 "trtype": "tcp", 00:06:14.266 "method": "nvmf_get_transports", 00:06:14.266 "req_id": 1 00:06:14.266 } 00:06:14.266 Got JSON-RPC error response 00:06:14.266 response: 00:06:14.266 { 00:06:14.266 "code": -19, 00:06:14.266 "message": "No such device" 00:06:14.266 } 00:06:14.266 18:13:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:14.266 18:13:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:14.266 18:13:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:14.266 18:13:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:14.266 [2024-07-11 18:13:00.623602] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:14.266 18:13:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:14.266 18:13:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:14.266 18:13:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:14.266 18:13:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:14.525 18:13:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:14.525 18:13:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:14.525 { 00:06:14.525 "subsystems": [ 00:06:14.525 { 00:06:14.525 "subsystem": "keyring", 00:06:14.525 "config": [] 00:06:14.525 }, 00:06:14.525 { 00:06:14.525 "subsystem": "iobuf", 00:06:14.525 "config": [ 00:06:14.525 { 00:06:14.525 "method": "iobuf_set_options", 00:06:14.525 "params": { 00:06:14.525 "small_pool_count": 8192, 00:06:14.525 "large_pool_count": 1024, 00:06:14.525 "small_bufsize": 8192, 00:06:14.525 "large_bufsize": 135168 00:06:14.525 } 00:06:14.525 } 00:06:14.525 ] 00:06:14.525 }, 00:06:14.525 { 00:06:14.525 "subsystem": "sock", 00:06:14.525 "config": [ 00:06:14.525 { 00:06:14.525 "method": 
"sock_set_default_impl", 00:06:14.525 "params": { 00:06:14.525 "impl_name": "posix" 00:06:14.525 } 00:06:14.525 }, 00:06:14.525 { 00:06:14.525 "method": "sock_impl_set_options", 00:06:14.525 "params": { 00:06:14.525 "impl_name": "ssl", 00:06:14.525 "recv_buf_size": 4096, 00:06:14.525 "send_buf_size": 4096, 00:06:14.525 "enable_recv_pipe": true, 00:06:14.525 "enable_quickack": false, 00:06:14.525 "enable_placement_id": 0, 00:06:14.525 "enable_zerocopy_send_server": true, 00:06:14.525 "enable_zerocopy_send_client": false, 00:06:14.525 "zerocopy_threshold": 0, 00:06:14.525 "tls_version": 0, 00:06:14.525 "enable_ktls": false 00:06:14.525 } 00:06:14.525 }, 00:06:14.525 { 00:06:14.525 "method": "sock_impl_set_options", 00:06:14.525 "params": { 00:06:14.525 "impl_name": "posix", 00:06:14.525 "recv_buf_size": 2097152, 00:06:14.525 "send_buf_size": 2097152, 00:06:14.525 "enable_recv_pipe": true, 00:06:14.525 "enable_quickack": false, 00:06:14.525 "enable_placement_id": 0, 00:06:14.525 "enable_zerocopy_send_server": true, 00:06:14.525 "enable_zerocopy_send_client": false, 00:06:14.525 "zerocopy_threshold": 0, 00:06:14.525 "tls_version": 0, 00:06:14.525 "enable_ktls": false 00:06:14.525 } 00:06:14.525 } 00:06:14.525 ] 00:06:14.525 }, 00:06:14.525 { 00:06:14.525 "subsystem": "vmd", 00:06:14.525 "config": [] 00:06:14.525 }, 00:06:14.525 { 00:06:14.525 "subsystem": "accel", 00:06:14.525 "config": [ 00:06:14.525 { 00:06:14.525 "method": "accel_set_options", 00:06:14.525 "params": { 00:06:14.525 "small_cache_size": 128, 00:06:14.525 "large_cache_size": 16, 00:06:14.525 "task_count": 2048, 00:06:14.525 "sequence_count": 2048, 00:06:14.525 "buf_count": 2048 00:06:14.525 } 00:06:14.525 } 00:06:14.525 ] 00:06:14.525 }, 00:06:14.525 { 00:06:14.525 "subsystem": "bdev", 00:06:14.525 "config": [ 00:06:14.525 { 00:06:14.525 "method": "bdev_set_options", 00:06:14.525 "params": { 00:06:14.525 "bdev_io_pool_size": 65535, 00:06:14.525 "bdev_io_cache_size": 256, 00:06:14.525 "bdev_auto_examine": true, 00:06:14.525 "iobuf_small_cache_size": 128, 00:06:14.525 "iobuf_large_cache_size": 16 00:06:14.525 } 00:06:14.525 }, 00:06:14.525 { 00:06:14.525 "method": "bdev_raid_set_options", 00:06:14.525 "params": { 00:06:14.525 "process_window_size_kb": 1024 00:06:14.525 } 00:06:14.525 }, 00:06:14.525 { 00:06:14.525 "method": "bdev_iscsi_set_options", 00:06:14.525 "params": { 00:06:14.525 "timeout_sec": 30 00:06:14.525 } 00:06:14.525 }, 00:06:14.525 { 00:06:14.525 "method": "bdev_nvme_set_options", 00:06:14.525 "params": { 00:06:14.525 "action_on_timeout": "none", 00:06:14.525 "timeout_us": 0, 00:06:14.525 "timeout_admin_us": 0, 00:06:14.525 "keep_alive_timeout_ms": 10000, 00:06:14.525 "arbitration_burst": 0, 00:06:14.525 "low_priority_weight": 0, 00:06:14.525 "medium_priority_weight": 0, 00:06:14.525 "high_priority_weight": 0, 00:06:14.525 "nvme_adminq_poll_period_us": 10000, 00:06:14.525 "nvme_ioq_poll_period_us": 0, 00:06:14.525 "io_queue_requests": 0, 00:06:14.525 "delay_cmd_submit": true, 00:06:14.525 "transport_retry_count": 4, 00:06:14.525 "bdev_retry_count": 3, 00:06:14.525 "transport_ack_timeout": 0, 00:06:14.525 "ctrlr_loss_timeout_sec": 0, 00:06:14.525 "reconnect_delay_sec": 0, 00:06:14.525 "fast_io_fail_timeout_sec": 0, 00:06:14.525 "disable_auto_failback": false, 00:06:14.525 "generate_uuids": false, 00:06:14.525 "transport_tos": 0, 00:06:14.525 "nvme_error_stat": false, 00:06:14.525 "rdma_srq_size": 0, 00:06:14.525 "io_path_stat": false, 00:06:14.525 "allow_accel_sequence": false, 00:06:14.525 "rdma_max_cq_size": 0, 
00:06:14.525 "rdma_cm_event_timeout_ms": 0, 00:06:14.525 "dhchap_digests": [ 00:06:14.525 "sha256", 00:06:14.525 "sha384", 00:06:14.525 "sha512" 00:06:14.525 ], 00:06:14.525 "dhchap_dhgroups": [ 00:06:14.525 "null", 00:06:14.525 "ffdhe2048", 00:06:14.525 "ffdhe3072", 00:06:14.525 "ffdhe4096", 00:06:14.525 "ffdhe6144", 00:06:14.525 "ffdhe8192" 00:06:14.525 ] 00:06:14.525 } 00:06:14.525 }, 00:06:14.525 { 00:06:14.525 "method": "bdev_nvme_set_hotplug", 00:06:14.525 "params": { 00:06:14.525 "period_us": 100000, 00:06:14.525 "enable": false 00:06:14.525 } 00:06:14.525 }, 00:06:14.525 { 00:06:14.525 "method": "bdev_wait_for_examine" 00:06:14.525 } 00:06:14.525 ] 00:06:14.525 }, 00:06:14.525 { 00:06:14.525 "subsystem": "scsi", 00:06:14.525 "config": null 00:06:14.525 }, 00:06:14.525 { 00:06:14.525 "subsystem": "scheduler", 00:06:14.525 "config": [ 00:06:14.525 { 00:06:14.525 "method": "framework_set_scheduler", 00:06:14.525 "params": { 00:06:14.525 "name": "static" 00:06:14.525 } 00:06:14.525 } 00:06:14.525 ] 00:06:14.525 }, 00:06:14.525 { 00:06:14.525 "subsystem": "vhost_scsi", 00:06:14.525 "config": [] 00:06:14.525 }, 00:06:14.525 { 00:06:14.525 "subsystem": "vhost_blk", 00:06:14.525 "config": [] 00:06:14.525 }, 00:06:14.525 { 00:06:14.525 "subsystem": "ublk", 00:06:14.525 "config": [] 00:06:14.525 }, 00:06:14.525 { 00:06:14.525 "subsystem": "nbd", 00:06:14.525 "config": [] 00:06:14.525 }, 00:06:14.525 { 00:06:14.525 "subsystem": "nvmf", 00:06:14.525 "config": [ 00:06:14.525 { 00:06:14.525 "method": "nvmf_set_config", 00:06:14.525 "params": { 00:06:14.525 "discovery_filter": "match_any", 00:06:14.525 "admin_cmd_passthru": { 00:06:14.525 "identify_ctrlr": false 00:06:14.525 } 00:06:14.525 } 00:06:14.525 }, 00:06:14.525 { 00:06:14.525 "method": "nvmf_set_max_subsystems", 00:06:14.525 "params": { 00:06:14.525 "max_subsystems": 1024 00:06:14.525 } 00:06:14.525 }, 00:06:14.525 { 00:06:14.525 "method": "nvmf_set_crdt", 00:06:14.525 "params": { 00:06:14.525 "crdt1": 0, 00:06:14.525 "crdt2": 0, 00:06:14.525 "crdt3": 0 00:06:14.525 } 00:06:14.525 }, 00:06:14.525 { 00:06:14.525 "method": "nvmf_create_transport", 00:06:14.525 "params": { 00:06:14.525 "trtype": "TCP", 00:06:14.525 "max_queue_depth": 128, 00:06:14.525 "max_io_qpairs_per_ctrlr": 127, 00:06:14.525 "in_capsule_data_size": 4096, 00:06:14.525 "max_io_size": 131072, 00:06:14.525 "io_unit_size": 131072, 00:06:14.525 "max_aq_depth": 128, 00:06:14.525 "num_shared_buffers": 511, 00:06:14.525 "buf_cache_size": 4294967295, 00:06:14.525 "dif_insert_or_strip": false, 00:06:14.525 "zcopy": false, 00:06:14.525 "c2h_success": true, 00:06:14.525 "sock_priority": 0, 00:06:14.525 "abort_timeout_sec": 1, 00:06:14.525 "ack_timeout": 0, 00:06:14.525 "data_wr_pool_size": 0 00:06:14.525 } 00:06:14.525 } 00:06:14.525 ] 00:06:14.525 }, 00:06:14.525 { 00:06:14.525 "subsystem": "iscsi", 00:06:14.525 "config": [ 00:06:14.525 { 00:06:14.525 "method": "iscsi_set_options", 00:06:14.525 "params": { 00:06:14.525 "node_base": "iqn.2016-06.io.spdk", 00:06:14.525 "max_sessions": 128, 00:06:14.525 "max_connections_per_session": 2, 00:06:14.525 "max_queue_depth": 64, 00:06:14.525 "default_time2wait": 2, 00:06:14.525 "default_time2retain": 20, 00:06:14.525 "first_burst_length": 8192, 00:06:14.525 "immediate_data": true, 00:06:14.525 "allow_duplicated_isid": false, 00:06:14.525 "error_recovery_level": 0, 00:06:14.525 "nop_timeout": 60, 00:06:14.525 "nop_in_interval": 30, 00:06:14.525 "disable_chap": false, 00:06:14.525 "require_chap": false, 00:06:14.525 "mutual_chap": false, 
00:06:14.525 "chap_group": 0, 00:06:14.525 "max_large_datain_per_connection": 64, 00:06:14.525 "max_r2t_per_connection": 4, 00:06:14.525 "pdu_pool_size": 36864, 00:06:14.525 "immediate_data_pool_size": 16384, 00:06:14.525 "data_out_pool_size": 2048 00:06:14.525 } 00:06:14.525 } 00:06:14.525 ] 00:06:14.525 } 00:06:14.525 ] 00:06:14.525 } 00:06:14.525 18:13:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:14.525 18:13:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 74261 00:06:14.525 18:13:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 74261 ']' 00:06:14.525 18:13:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 74261 00:06:14.525 18:13:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:06:14.525 18:13:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:14.525 18:13:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74261 00:06:14.525 killing process with pid 74261 00:06:14.525 18:13:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:14.525 18:13:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:14.525 18:13:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74261' 00:06:14.525 18:13:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 74261 00:06:14.525 18:13:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 74261 00:06:14.784 18:13:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=74290 00:06:14.784 18:13:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:14.784 18:13:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:20.065 18:13:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 74290 00:06:20.065 18:13:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 74290 ']' 00:06:20.065 18:13:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 74290 00:06:20.065 18:13:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:06:20.065 18:13:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:20.065 18:13:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74290 00:06:20.065 killing process with pid 74290 00:06:20.065 18:13:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:20.065 18:13:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:20.065 18:13:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74290' 00:06:20.065 18:13:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 74290 00:06:20.065 18:13:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 74290 00:06:20.065 18:13:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:20.065 18:13:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 
00:06:20.065 ************************************ 00:06:20.065 END TEST skip_rpc_with_json 00:06:20.065 ************************************ 00:06:20.065 00:06:20.065 real 0m6.753s 00:06:20.065 user 0m6.493s 00:06:20.065 sys 0m0.560s 00:06:20.065 18:13:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:20.065 18:13:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:20.065 18:13:06 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:20.065 18:13:06 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:20.065 18:13:06 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:20.065 18:13:06 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:20.065 18:13:06 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:20.065 ************************************ 00:06:20.065 START TEST skip_rpc_with_delay 00:06:20.065 ************************************ 00:06:20.065 18:13:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:06:20.065 18:13:06 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:20.065 18:13:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:06:20.065 18:13:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:20.065 18:13:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:20.065 18:13:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:20.065 18:13:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:20.065 18:13:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:20.065 18:13:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:20.065 18:13:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:20.065 18:13:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:20.065 18:13:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:20.065 18:13:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:20.325 [2024-07-11 18:13:06.553312] app.c: 831:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
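The spdk_app_start error just above is the behavior skip_rpc_with_delay exists to pin down: --wait-for-rpc parks initialization until an RPC arrives, so pairing it with --no-rpc-server leaves the target no way to ever finish starting, and it exits instead. A sketch of the combination --wait-for-rpc is actually meant for (framework_start_init is the standard RPC that releases the target; the sock call is only an example of a setting that must land before subsystem init):

/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --wait-for-rpc &
# adjust pre-init settings while the target is parked, e.g.:
./scripts/rpc.py sock_impl_set_options -i posix --enable-zerocopy-send-server
# then let initialization proceed
./scripts/rpc.py framework_start_init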
00:06:20.325 [2024-07-11 18:13:06.553541] app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:06:20.325 18:13:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:06:20.325 18:13:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:20.325 18:13:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:20.325 18:13:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:20.325 00:06:20.325 real 0m0.179s 00:06:20.325 user 0m0.096s 00:06:20.325 sys 0m0.080s 00:06:20.325 18:13:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:20.325 18:13:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:20.325 ************************************ 00:06:20.325 END TEST skip_rpc_with_delay 00:06:20.325 ************************************ 00:06:20.325 18:13:06 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:20.325 18:13:06 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:20.325 18:13:06 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:20.325 18:13:06 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:20.325 18:13:06 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:20.325 18:13:06 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:20.325 18:13:06 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:20.325 ************************************ 00:06:20.325 START TEST exit_on_failed_rpc_init 00:06:20.325 ************************************ 00:06:20.325 18:13:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:06:20.325 18:13:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=74401 00:06:20.325 18:13:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 74401 00:06:20.325 18:13:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:20.325 18:13:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 74401 ']' 00:06:20.325 18:13:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:20.325 18:13:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:20.325 18:13:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:20.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:20.325 18:13:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:20.325 18:13:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:20.584 [2024-07-11 18:13:06.788831] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:06:20.584 [2024-07-11 18:13:06.789014] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74401 ] 00:06:20.584 [2024-07-11 18:13:06.937813] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.584 [2024-07-11 18:13:06.971850] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.522 18:13:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:21.522 18:13:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:06:21.522 18:13:07 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:21.522 18:13:07 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:21.522 18:13:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:06:21.522 18:13:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:21.522 18:13:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:21.522 18:13:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:21.522 18:13:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:21.522 18:13:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:21.522 18:13:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:21.522 18:13:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:21.522 18:13:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:21.522 18:13:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:21.522 18:13:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:21.522 [2024-07-11 18:13:07.823757] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:06:21.522 [2024-07-11 18:13:07.823950] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74418 ] 00:06:21.781 [2024-07-11 18:13:07.975816] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.781 [2024-07-11 18:13:08.018731] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:21.781 [2024-07-11 18:13:08.018909] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
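The two rpc.c errors above are the collision exit_on_failed_rpc_init sets up on purpose: both targets default to /var/tmp/spdk.sock, the second instance cannot claim the listener, and spdk_app_stop turns that into the non-zero exit the test asserts on. Running two targets side by side needs per-instance RPC sockets; a sketch, with the second socket path made up for illustration:

/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &                        # owns /var/tmp/spdk.sock
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk2.sock & # its own listener, no clash
./scripts/rpc.py -s /var/tmp/spdk2.sock rpc_get_methods                         # talk to the second instance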
00:06:21.781 [2024-07-11 18:13:08.018958] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:21.781 [2024-07-11 18:13:08.018983] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:21.781 18:13:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:06:21.781 18:13:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:21.781 18:13:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:06:21.781 18:13:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:06:21.781 18:13:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:06:21.781 18:13:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:21.781 18:13:08 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:21.781 18:13:08 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 74401 00:06:21.781 18:13:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 74401 ']' 00:06:21.781 18:13:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 74401 00:06:21.781 18:13:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:06:21.781 18:13:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:21.781 18:13:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74401 00:06:21.781 killing process with pid 74401 00:06:21.781 18:13:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:21.781 18:13:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:21.781 18:13:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74401' 00:06:21.781 18:13:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 74401 00:06:21.781 18:13:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 74401 00:06:22.041 00:06:22.041 real 0m1.769s 00:06:22.041 user 0m2.122s 00:06:22.041 sys 0m0.410s 00:06:22.041 18:13:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:22.041 ************************************ 00:06:22.041 END TEST exit_on_failed_rpc_init 00:06:22.041 ************************************ 00:06:22.041 18:13:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:22.304 18:13:08 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:22.304 18:13:08 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:22.304 00:06:22.304 real 0m14.304s 00:06:22.304 user 0m13.783s 00:06:22.304 sys 0m1.463s 00:06:22.304 18:13:08 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:22.304 ************************************ 00:06:22.304 END TEST skip_rpc 00:06:22.304 ************************************ 00:06:22.304 18:13:08 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:22.304 18:13:08 -- common/autotest_common.sh@1142 -- # return 0 00:06:22.304 18:13:08 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:22.304 18:13:08 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:22.304 
18:13:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:22.304 18:13:08 -- common/autotest_common.sh@10 -- # set +x 00:06:22.304 ************************************ 00:06:22.304 START TEST rpc_client 00:06:22.304 ************************************ 00:06:22.304 18:13:08 rpc_client -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:22.304 * Looking for test storage... 00:06:22.304 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:06:22.304 18:13:08 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:06:22.304 OK 00:06:22.304 18:13:08 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:22.304 00:06:22.304 real 0m0.126s 00:06:22.304 user 0m0.063s 00:06:22.304 sys 0m0.071s 00:06:22.304 ************************************ 00:06:22.304 END TEST rpc_client 00:06:22.304 ************************************ 00:06:22.304 18:13:08 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:22.304 18:13:08 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:22.304 18:13:08 -- common/autotest_common.sh@1142 -- # return 0 00:06:22.304 18:13:08 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:22.304 18:13:08 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:22.304 18:13:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:22.304 18:13:08 -- common/autotest_common.sh@10 -- # set +x 00:06:22.587 ************************************ 00:06:22.587 START TEST json_config 00:06:22.587 ************************************ 00:06:22.587 18:13:08 json_config -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:22.587 18:13:08 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:22.587 18:13:08 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:22.587 18:13:08 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:22.587 18:13:08 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:22.587 18:13:08 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:22.587 18:13:08 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:22.587 18:13:08 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:22.587 18:13:08 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:22.587 18:13:08 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:22.587 18:13:08 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:22.587 18:13:08 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:22.587 18:13:08 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:22.587 18:13:08 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1fa35760-e429-4362-9ca7-dc58b037cb44 00:06:22.587 18:13:08 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=1fa35760-e429-4362-9ca7-dc58b037cb44 00:06:22.587 18:13:08 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:22.587 18:13:08 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:22.587 18:13:08 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:22.587 18:13:08 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:22.587 18:13:08 json_config -- 
nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:22.587 18:13:08 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:22.587 18:13:08 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:22.587 18:13:08 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:22.587 18:13:08 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:22.587 18:13:08 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:22.587 18:13:08 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:22.587 18:13:08 json_config -- paths/export.sh@5 -- # export PATH 00:06:22.587 18:13:08 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:22.587 18:13:08 json_config -- nvmf/common.sh@47 -- # : 0 00:06:22.587 18:13:08 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:22.587 18:13:08 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:22.587 18:13:08 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:22.587 18:13:08 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:22.587 18:13:08 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:22.587 18:13:08 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:22.587 18:13:08 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:22.587 18:13:08 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:22.587 18:13:08 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:22.587 18:13:08 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:22.587 18:13:08 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:22.587 18:13:08 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:22.587 18:13:08 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + 
SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:22.587 18:13:08 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:06:22.587 WARNING: No tests are enabled so not running JSON configuration tests 00:06:22.587 18:13:08 json_config -- json_config/json_config.sh@28 -- # exit 0 00:06:22.587 00:06:22.587 real 0m0.085s 00:06:22.587 user 0m0.042s 00:06:22.587 sys 0m0.034s 00:06:22.587 18:13:08 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:22.587 18:13:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:22.587 ************************************ 00:06:22.587 END TEST json_config 00:06:22.587 ************************************ 00:06:22.587 18:13:08 -- common/autotest_common.sh@1142 -- # return 0 00:06:22.587 18:13:08 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:22.587 18:13:08 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:22.587 18:13:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:22.587 18:13:08 -- common/autotest_common.sh@10 -- # set +x 00:06:22.587 ************************************ 00:06:22.587 START TEST json_config_extra_key 00:06:22.587 ************************************ 00:06:22.587 18:13:08 json_config_extra_key -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:22.587 18:13:08 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:22.587 18:13:08 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:22.587 18:13:08 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:22.587 18:13:08 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:22.587 18:13:08 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:22.587 18:13:08 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:22.587 18:13:08 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:22.587 18:13:08 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:22.587 18:13:08 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:22.587 18:13:08 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:22.587 18:13:08 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:22.587 18:13:08 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:22.587 18:13:08 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1fa35760-e429-4362-9ca7-dc58b037cb44 00:06:22.587 18:13:08 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=1fa35760-e429-4362-9ca7-dc58b037cb44 00:06:22.588 18:13:08 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:22.588 18:13:08 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:22.588 18:13:08 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:22.588 18:13:08 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:22.588 18:13:08 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:22.588 18:13:08 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:06:22.588 18:13:08 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:22.588 18:13:08 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:22.588 18:13:08 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:22.588 18:13:08 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:22.588 18:13:08 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:22.588 18:13:08 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:22.588 18:13:08 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:22.588 18:13:08 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:06:22.588 18:13:08 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:22.588 18:13:08 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:22.588 18:13:08 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:22.588 18:13:08 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:22.588 18:13:08 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:22.588 18:13:08 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:22.588 18:13:08 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:22.588 18:13:08 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:22.588 18:13:08 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:22.588 18:13:08 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:22.588 18:13:08 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:22.588 18:13:08 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 
00:06:22.588 18:13:08 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:22.588 18:13:08 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:22.588 18:13:08 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:22.588 INFO: launching applications... 00:06:22.588 18:13:08 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:06:22.588 18:13:08 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:22.588 18:13:08 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:22.588 18:13:08 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:22.588 18:13:08 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:22.588 18:13:08 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:22.588 18:13:08 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:22.588 18:13:08 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:22.588 18:13:08 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:22.588 Waiting for target to run... 00:06:22.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:22.588 18:13:08 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:22.588 18:13:08 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:22.588 18:13:08 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:22.588 18:13:08 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=74572 00:06:22.588 18:13:08 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:22.588 18:13:08 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 74572 /var/tmp/spdk_tgt.sock 00:06:22.588 18:13:08 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 74572 ']' 00:06:22.588 18:13:08 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:22.588 18:13:08 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:22.588 18:13:08 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:22.588 18:13:08 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:22.588 18:13:08 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:22.588 18:13:08 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:22.854 [2024-07-11 18:13:09.043366] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:06:22.854 [2024-07-11 18:13:09.043580] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74572 ] 00:06:23.113 [2024-07-11 18:13:09.354524] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.113 [2024-07-11 18:13:09.375281] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.680 00:06:23.680 INFO: shutting down applications... 00:06:23.680 18:13:09 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:23.680 18:13:09 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:06:23.680 18:13:09 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:23.680 18:13:09 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:23.680 18:13:09 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:23.680 18:13:09 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:23.680 18:13:09 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:23.680 18:13:09 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 74572 ]] 00:06:23.680 18:13:09 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 74572 00:06:23.680 18:13:09 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:23.680 18:13:09 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:23.680 18:13:09 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 74572 00:06:23.680 18:13:09 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:24.248 18:13:10 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:24.248 18:13:10 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:24.248 18:13:10 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 74572 00:06:24.248 18:13:10 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:24.248 18:13:10 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:24.248 18:13:10 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:24.248 18:13:10 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:24.248 SPDK target shutdown done 00:06:24.248 18:13:10 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:24.248 Success 00:06:24.248 00:06:24.248 real 0m1.552s 00:06:24.248 user 0m1.297s 00:06:24.248 sys 0m0.361s 00:06:24.248 18:13:10 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:24.248 18:13:10 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:24.248 ************************************ 00:06:24.248 END TEST json_config_extra_key 00:06:24.248 ************************************ 00:06:24.248 18:13:10 -- common/autotest_common.sh@1142 -- # return 0 00:06:24.248 18:13:10 -- spdk/autotest.sh@174 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:24.248 18:13:10 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:24.248 18:13:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:24.248 18:13:10 -- common/autotest_common.sh@10 -- # set +x 00:06:24.248 
************************************ 00:06:24.248 START TEST alias_rpc 00:06:24.248 ************************************ 00:06:24.248 18:13:10 alias_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:24.248 * Looking for test storage... 00:06:24.248 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:06:24.248 18:13:10 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:24.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:24.248 18:13:10 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=74638 00:06:24.248 18:13:10 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 74638 00:06:24.248 18:13:10 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:24.248 18:13:10 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 74638 ']' 00:06:24.248 18:13:10 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:24.248 18:13:10 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:24.248 18:13:10 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:24.248 18:13:10 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:24.248 18:13:10 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:24.248 [2024-07-11 18:13:10.649472] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:06:24.249 [2024-07-11 18:13:10.649662] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74638 ] 00:06:24.508 [2024-07-11 18:13:10.798769] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.508 [2024-07-11 18:13:10.838632] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.444 18:13:11 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:25.444 18:13:11 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:25.444 18:13:11 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:06:25.444 18:13:11 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 74638 00:06:25.444 18:13:11 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 74638 ']' 00:06:25.444 18:13:11 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 74638 00:06:25.444 18:13:11 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:06:25.444 18:13:11 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:25.444 18:13:11 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74638 00:06:25.444 killing process with pid 74638 00:06:25.444 18:13:11 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:25.444 18:13:11 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:25.444 18:13:11 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74638' 00:06:25.444 18:13:11 alias_rpc -- common/autotest_common.sh@967 -- # kill 74638 00:06:25.444 18:13:11 alias_rpc -- common/autotest_common.sh@972 -- # wait 74638 00:06:25.702 ************************************ 00:06:25.702 END TEST alias_rpc 00:06:25.702 
************************************ 00:06:25.702 00:06:25.702 real 0m1.630s 00:06:25.702 user 0m1.896s 00:06:25.702 sys 0m0.360s 00:06:25.702 18:13:12 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:25.702 18:13:12 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:25.962 18:13:12 -- common/autotest_common.sh@1142 -- # return 0 00:06:25.962 18:13:12 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:06:25.962 18:13:12 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:25.962 18:13:12 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:25.962 18:13:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:25.962 18:13:12 -- common/autotest_common.sh@10 -- # set +x 00:06:25.962 ************************************ 00:06:25.962 START TEST spdkcli_tcp 00:06:25.962 ************************************ 00:06:25.962 18:13:12 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:25.962 * Looking for test storage... 00:06:25.962 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:06:25.962 18:13:12 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:06:25.962 18:13:12 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:06:25.962 18:13:12 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:06:25.962 18:13:12 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:25.962 18:13:12 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:25.962 18:13:12 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:25.962 18:13:12 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:25.962 18:13:12 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:25.962 18:13:12 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:25.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:25.962 18:13:12 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=74715 00:06:25.962 18:13:12 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:25.962 18:13:12 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 74715 00:06:25.962 18:13:12 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 74715 ']' 00:06:25.962 18:13:12 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:25.962 18:13:12 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:25.962 18:13:12 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:25.962 18:13:12 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:25.962 18:13:12 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:26.221 [2024-07-11 18:13:12.381319] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:06:26.221 [2024-07-11 18:13:12.381482] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74715 ] 00:06:26.221 [2024-07-11 18:13:12.523170] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:26.221 [2024-07-11 18:13:12.558424] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.221 [2024-07-11 18:13:12.558461] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:27.163 18:13:13 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:27.163 18:13:13 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:06:27.163 18:13:13 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:27.163 18:13:13 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=74732 00:06:27.163 18:13:13 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:27.163 [ 00:06:27.163 "bdev_malloc_delete", 00:06:27.164 "bdev_malloc_create", 00:06:27.164 "bdev_null_resize", 00:06:27.164 "bdev_null_delete", 00:06:27.164 "bdev_null_create", 00:06:27.164 "bdev_nvme_cuse_unregister", 00:06:27.164 "bdev_nvme_cuse_register", 00:06:27.164 "bdev_opal_new_user", 00:06:27.164 "bdev_opal_set_lock_state", 00:06:27.164 "bdev_opal_delete", 00:06:27.164 "bdev_opal_get_info", 00:06:27.164 "bdev_opal_create", 00:06:27.164 "bdev_nvme_opal_revert", 00:06:27.164 "bdev_nvme_opal_init", 00:06:27.164 "bdev_nvme_send_cmd", 00:06:27.164 "bdev_nvme_get_path_iostat", 00:06:27.164 "bdev_nvme_get_mdns_discovery_info", 00:06:27.164 "bdev_nvme_stop_mdns_discovery", 00:06:27.164 "bdev_nvme_start_mdns_discovery", 00:06:27.164 "bdev_nvme_set_multipath_policy", 00:06:27.164 "bdev_nvme_set_preferred_path", 00:06:27.164 "bdev_nvme_get_io_paths", 00:06:27.164 "bdev_nvme_remove_error_injection", 00:06:27.164 "bdev_nvme_add_error_injection", 00:06:27.164 "bdev_nvme_get_discovery_info", 00:06:27.164 "bdev_nvme_stop_discovery", 00:06:27.164 "bdev_nvme_start_discovery", 00:06:27.164 "bdev_nvme_get_controller_health_info", 00:06:27.164 "bdev_nvme_disable_controller", 00:06:27.164 "bdev_nvme_enable_controller", 00:06:27.164 "bdev_nvme_reset_controller", 00:06:27.164 "bdev_nvme_get_transport_statistics", 00:06:27.164 "bdev_nvme_apply_firmware", 00:06:27.164 "bdev_nvme_detach_controller", 00:06:27.164 "bdev_nvme_get_controllers", 00:06:27.164 "bdev_nvme_attach_controller", 00:06:27.164 "bdev_nvme_set_hotplug", 00:06:27.164 "bdev_nvme_set_options", 00:06:27.164 "bdev_passthru_delete", 00:06:27.164 "bdev_passthru_create", 00:06:27.164 "bdev_lvol_set_parent_bdev", 00:06:27.164 "bdev_lvol_set_parent", 00:06:27.164 "bdev_lvol_check_shallow_copy", 00:06:27.164 "bdev_lvol_start_shallow_copy", 00:06:27.164 "bdev_lvol_grow_lvstore", 00:06:27.164 "bdev_lvol_get_lvols", 00:06:27.164 "bdev_lvol_get_lvstores", 00:06:27.164 "bdev_lvol_delete", 00:06:27.164 "bdev_lvol_set_read_only", 00:06:27.164 "bdev_lvol_resize", 00:06:27.164 "bdev_lvol_decouple_parent", 00:06:27.164 "bdev_lvol_inflate", 00:06:27.164 "bdev_lvol_rename", 00:06:27.164 "bdev_lvol_clone_bdev", 00:06:27.164 "bdev_lvol_clone", 00:06:27.164 "bdev_lvol_snapshot", 00:06:27.164 "bdev_lvol_create", 00:06:27.164 "bdev_lvol_delete_lvstore", 00:06:27.164 "bdev_lvol_rename_lvstore", 00:06:27.164 "bdev_lvol_create_lvstore", 
00:06:27.164 "bdev_raid_set_options", 00:06:27.164 "bdev_raid_remove_base_bdev", 00:06:27.164 "bdev_raid_add_base_bdev", 00:06:27.164 "bdev_raid_delete", 00:06:27.164 "bdev_raid_create", 00:06:27.164 "bdev_raid_get_bdevs", 00:06:27.164 "bdev_error_inject_error", 00:06:27.164 "bdev_error_delete", 00:06:27.164 "bdev_error_create", 00:06:27.164 "bdev_split_delete", 00:06:27.164 "bdev_split_create", 00:06:27.164 "bdev_delay_delete", 00:06:27.164 "bdev_delay_create", 00:06:27.164 "bdev_delay_update_latency", 00:06:27.164 "bdev_zone_block_delete", 00:06:27.164 "bdev_zone_block_create", 00:06:27.164 "blobfs_create", 00:06:27.164 "blobfs_detect", 00:06:27.164 "blobfs_set_cache_size", 00:06:27.164 "bdev_xnvme_delete", 00:06:27.164 "bdev_xnvme_create", 00:06:27.164 "bdev_aio_delete", 00:06:27.164 "bdev_aio_rescan", 00:06:27.164 "bdev_aio_create", 00:06:27.164 "bdev_ftl_set_property", 00:06:27.164 "bdev_ftl_get_properties", 00:06:27.164 "bdev_ftl_get_stats", 00:06:27.164 "bdev_ftl_unmap", 00:06:27.164 "bdev_ftl_unload", 00:06:27.164 "bdev_ftl_delete", 00:06:27.164 "bdev_ftl_load", 00:06:27.164 "bdev_ftl_create", 00:06:27.164 "bdev_virtio_attach_controller", 00:06:27.164 "bdev_virtio_scsi_get_devices", 00:06:27.164 "bdev_virtio_detach_controller", 00:06:27.164 "bdev_virtio_blk_set_hotplug", 00:06:27.164 "bdev_iscsi_delete", 00:06:27.164 "bdev_iscsi_create", 00:06:27.164 "bdev_iscsi_set_options", 00:06:27.164 "accel_error_inject_error", 00:06:27.164 "ioat_scan_accel_module", 00:06:27.164 "dsa_scan_accel_module", 00:06:27.164 "iaa_scan_accel_module", 00:06:27.164 "keyring_file_remove_key", 00:06:27.164 "keyring_file_add_key", 00:06:27.164 "keyring_linux_set_options", 00:06:27.164 "iscsi_get_histogram", 00:06:27.164 "iscsi_enable_histogram", 00:06:27.164 "iscsi_set_options", 00:06:27.164 "iscsi_get_auth_groups", 00:06:27.164 "iscsi_auth_group_remove_secret", 00:06:27.164 "iscsi_auth_group_add_secret", 00:06:27.164 "iscsi_delete_auth_group", 00:06:27.164 "iscsi_create_auth_group", 00:06:27.164 "iscsi_set_discovery_auth", 00:06:27.164 "iscsi_get_options", 00:06:27.164 "iscsi_target_node_request_logout", 00:06:27.164 "iscsi_target_node_set_redirect", 00:06:27.164 "iscsi_target_node_set_auth", 00:06:27.164 "iscsi_target_node_add_lun", 00:06:27.164 "iscsi_get_stats", 00:06:27.164 "iscsi_get_connections", 00:06:27.164 "iscsi_portal_group_set_auth", 00:06:27.164 "iscsi_start_portal_group", 00:06:27.164 "iscsi_delete_portal_group", 00:06:27.164 "iscsi_create_portal_group", 00:06:27.164 "iscsi_get_portal_groups", 00:06:27.164 "iscsi_delete_target_node", 00:06:27.164 "iscsi_target_node_remove_pg_ig_maps", 00:06:27.164 "iscsi_target_node_add_pg_ig_maps", 00:06:27.164 "iscsi_create_target_node", 00:06:27.164 "iscsi_get_target_nodes", 00:06:27.164 "iscsi_delete_initiator_group", 00:06:27.164 "iscsi_initiator_group_remove_initiators", 00:06:27.164 "iscsi_initiator_group_add_initiators", 00:06:27.164 "iscsi_create_initiator_group", 00:06:27.164 "iscsi_get_initiator_groups", 00:06:27.164 "nvmf_set_crdt", 00:06:27.164 "nvmf_set_config", 00:06:27.164 "nvmf_set_max_subsystems", 00:06:27.164 "nvmf_stop_mdns_prr", 00:06:27.164 "nvmf_publish_mdns_prr", 00:06:27.164 "nvmf_subsystem_get_listeners", 00:06:27.164 "nvmf_subsystem_get_qpairs", 00:06:27.164 "nvmf_subsystem_get_controllers", 00:06:27.164 "nvmf_get_stats", 00:06:27.164 "nvmf_get_transports", 00:06:27.164 "nvmf_create_transport", 00:06:27.164 "nvmf_get_targets", 00:06:27.164 "nvmf_delete_target", 00:06:27.164 "nvmf_create_target", 00:06:27.164 
"nvmf_subsystem_allow_any_host", 00:06:27.164 "nvmf_subsystem_remove_host", 00:06:27.164 "nvmf_subsystem_add_host", 00:06:27.164 "nvmf_ns_remove_host", 00:06:27.164 "nvmf_ns_add_host", 00:06:27.164 "nvmf_subsystem_remove_ns", 00:06:27.164 "nvmf_subsystem_add_ns", 00:06:27.164 "nvmf_subsystem_listener_set_ana_state", 00:06:27.164 "nvmf_discovery_get_referrals", 00:06:27.164 "nvmf_discovery_remove_referral", 00:06:27.164 "nvmf_discovery_add_referral", 00:06:27.164 "nvmf_subsystem_remove_listener", 00:06:27.164 "nvmf_subsystem_add_listener", 00:06:27.164 "nvmf_delete_subsystem", 00:06:27.164 "nvmf_create_subsystem", 00:06:27.164 "nvmf_get_subsystems", 00:06:27.164 "env_dpdk_get_mem_stats", 00:06:27.164 "nbd_get_disks", 00:06:27.164 "nbd_stop_disk", 00:06:27.164 "nbd_start_disk", 00:06:27.164 "ublk_recover_disk", 00:06:27.164 "ublk_get_disks", 00:06:27.164 "ublk_stop_disk", 00:06:27.164 "ublk_start_disk", 00:06:27.164 "ublk_destroy_target", 00:06:27.164 "ublk_create_target", 00:06:27.164 "virtio_blk_create_transport", 00:06:27.164 "virtio_blk_get_transports", 00:06:27.164 "vhost_controller_set_coalescing", 00:06:27.164 "vhost_get_controllers", 00:06:27.164 "vhost_delete_controller", 00:06:27.164 "vhost_create_blk_controller", 00:06:27.164 "vhost_scsi_controller_remove_target", 00:06:27.164 "vhost_scsi_controller_add_target", 00:06:27.164 "vhost_start_scsi_controller", 00:06:27.164 "vhost_create_scsi_controller", 00:06:27.164 "thread_set_cpumask", 00:06:27.164 "framework_get_governor", 00:06:27.164 "framework_get_scheduler", 00:06:27.164 "framework_set_scheduler", 00:06:27.164 "framework_get_reactors", 00:06:27.164 "thread_get_io_channels", 00:06:27.164 "thread_get_pollers", 00:06:27.164 "thread_get_stats", 00:06:27.164 "framework_monitor_context_switch", 00:06:27.164 "spdk_kill_instance", 00:06:27.164 "log_enable_timestamps", 00:06:27.164 "log_get_flags", 00:06:27.164 "log_clear_flag", 00:06:27.164 "log_set_flag", 00:06:27.164 "log_get_level", 00:06:27.164 "log_set_level", 00:06:27.164 "log_get_print_level", 00:06:27.164 "log_set_print_level", 00:06:27.164 "framework_enable_cpumask_locks", 00:06:27.164 "framework_disable_cpumask_locks", 00:06:27.164 "framework_wait_init", 00:06:27.164 "framework_start_init", 00:06:27.164 "scsi_get_devices", 00:06:27.164 "bdev_get_histogram", 00:06:27.164 "bdev_enable_histogram", 00:06:27.164 "bdev_set_qos_limit", 00:06:27.164 "bdev_set_qd_sampling_period", 00:06:27.164 "bdev_get_bdevs", 00:06:27.164 "bdev_reset_iostat", 00:06:27.164 "bdev_get_iostat", 00:06:27.164 "bdev_examine", 00:06:27.164 "bdev_wait_for_examine", 00:06:27.164 "bdev_set_options", 00:06:27.164 "notify_get_notifications", 00:06:27.164 "notify_get_types", 00:06:27.164 "accel_get_stats", 00:06:27.164 "accel_set_options", 00:06:27.164 "accel_set_driver", 00:06:27.164 "accel_crypto_key_destroy", 00:06:27.164 "accel_crypto_keys_get", 00:06:27.164 "accel_crypto_key_create", 00:06:27.164 "accel_assign_opc", 00:06:27.164 "accel_get_module_info", 00:06:27.164 "accel_get_opc_assignments", 00:06:27.164 "vmd_rescan", 00:06:27.164 "vmd_remove_device", 00:06:27.164 "vmd_enable", 00:06:27.164 "sock_get_default_impl", 00:06:27.164 "sock_set_default_impl", 00:06:27.164 "sock_impl_set_options", 00:06:27.164 "sock_impl_get_options", 00:06:27.164 "iobuf_get_stats", 00:06:27.164 "iobuf_set_options", 00:06:27.164 "framework_get_pci_devices", 00:06:27.164 "framework_get_config", 00:06:27.164 "framework_get_subsystems", 00:06:27.164 "trace_get_info", 00:06:27.164 "trace_get_tpoint_group_mask", 00:06:27.164 
"trace_disable_tpoint_group", 00:06:27.164 "trace_enable_tpoint_group", 00:06:27.164 "trace_clear_tpoint_mask", 00:06:27.164 "trace_set_tpoint_mask", 00:06:27.164 "keyring_get_keys", 00:06:27.164 "spdk_get_version", 00:06:27.164 "rpc_get_methods" 00:06:27.164 ] 00:06:27.164 18:13:13 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:27.164 18:13:13 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:27.165 18:13:13 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:27.423 18:13:13 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:27.423 18:13:13 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 74715 00:06:27.423 18:13:13 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 74715 ']' 00:06:27.423 18:13:13 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 74715 00:06:27.423 18:13:13 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:06:27.423 18:13:13 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:27.423 18:13:13 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74715 00:06:27.423 killing process with pid 74715 00:06:27.423 18:13:13 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:27.423 18:13:13 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:27.423 18:13:13 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74715' 00:06:27.423 18:13:13 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 74715 00:06:27.423 18:13:13 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 74715 00:06:27.683 00:06:27.683 real 0m1.750s 00:06:27.683 user 0m3.255s 00:06:27.683 sys 0m0.454s 00:06:27.683 18:13:13 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:27.683 ************************************ 00:06:27.683 END TEST spdkcli_tcp 00:06:27.683 ************************************ 00:06:27.683 18:13:13 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:27.683 18:13:13 -- common/autotest_common.sh@1142 -- # return 0 00:06:27.683 18:13:13 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:27.683 18:13:13 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:27.683 18:13:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:27.683 18:13:13 -- common/autotest_common.sh@10 -- # set +x 00:06:27.683 ************************************ 00:06:27.683 START TEST dpdk_mem_utility 00:06:27.683 ************************************ 00:06:27.683 18:13:13 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:27.683 * Looking for test storage... 
00:06:27.683 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:27.683 18:13:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:27.683 18:13:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=74801 00:06:27.683 18:13:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:27.683 18:13:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 74801 00:06:27.683 18:13:14 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 74801 ']' 00:06:27.683 18:13:14 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.683 18:13:14 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:27.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:27.683 18:13:14 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.683 18:13:14 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:27.683 18:13:14 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:27.943 [2024-07-11 18:13:14.110588] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:06:27.943 [2024-07-11 18:13:14.110775] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74801 ] 00:06:27.943 [2024-07-11 18:13:14.249069] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.943 [2024-07-11 18:13:14.285669] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.881 18:13:15 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:28.881 18:13:15 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:06:28.881 18:13:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:28.881 18:13:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:28.881 18:13:15 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:28.881 18:13:15 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:28.881 { 00:06:28.881 "filename": "/tmp/spdk_mem_dump.txt" 00:06:28.881 } 00:06:28.881 18:13:15 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:28.881 18:13:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:28.881 DPDK memory size 814.000000 MiB in 1 heap(s) 00:06:28.881 1 heaps totaling size 814.000000 MiB 00:06:28.881 size: 814.000000 MiB heap id: 0 00:06:28.881 end heaps---------- 00:06:28.881 8 mempools totaling size 598.116089 MiB 00:06:28.881 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:28.881 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:28.881 size: 84.521057 MiB name: bdev_io_74801 00:06:28.881 size: 51.011292 MiB name: evtpool_74801 00:06:28.881 size: 50.003479 MiB name: msgpool_74801 00:06:28.881 size: 21.763794 MiB name: PDU_Pool 00:06:28.882 size: 19.513306 MiB name: SCSI_TASK_Pool 
00:06:28.882 size: 0.026123 MiB name: Session_Pool 00:06:28.882 end mempools------- 00:06:28.882 6 memzones totaling size 4.142822 MiB 00:06:28.882 size: 1.000366 MiB name: RG_ring_0_74801 00:06:28.882 size: 1.000366 MiB name: RG_ring_1_74801 00:06:28.882 size: 1.000366 MiB name: RG_ring_4_74801 00:06:28.882 size: 1.000366 MiB name: RG_ring_5_74801 00:06:28.882 size: 0.125366 MiB name: RG_ring_2_74801 00:06:28.882 size: 0.015991 MiB name: RG_ring_3_74801 00:06:28.882 end memzones------- 00:06:28.882 18:13:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:06:28.882 heap id: 0 total size: 814.000000 MiB number of busy elements: 297 number of free elements: 15 00:06:28.882 list of free elements. size: 12.472473 MiB 00:06:28.882 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:28.882 element at address: 0x200018e00000 with size: 0.999878 MiB 00:06:28.882 element at address: 0x200019000000 with size: 0.999878 MiB 00:06:28.882 element at address: 0x200003e00000 with size: 0.996277 MiB 00:06:28.882 element at address: 0x200031c00000 with size: 0.994446 MiB 00:06:28.882 element at address: 0x200013800000 with size: 0.978699 MiB 00:06:28.882 element at address: 0x200007000000 with size: 0.959839 MiB 00:06:28.882 element at address: 0x200019200000 with size: 0.936584 MiB 00:06:28.882 element at address: 0x200000200000 with size: 0.833191 MiB 00:06:28.882 element at address: 0x20001aa00000 with size: 0.569153 MiB 00:06:28.882 element at address: 0x20000b200000 with size: 0.488892 MiB 00:06:28.882 element at address: 0x200000800000 with size: 0.486145 MiB 00:06:28.882 element at address: 0x200019400000 with size: 0.485657 MiB 00:06:28.882 element at address: 0x200027e00000 with size: 0.395752 MiB 00:06:28.882 element at address: 0x200003a00000 with size: 0.348572 MiB 00:06:28.882 list of standard malloc elements. 
size: 199.264954 MiB 00:06:28.882 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:06:28.882 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:06:28.882 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:28.882 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:06:28.882 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:28.882 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:28.882 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:06:28.882 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:28.882 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:06:28.882 element at address: 0x2000002d54c0 with size: 0.000183 MiB 00:06:28.882 element at address: 0x2000002d5580 with size: 0.000183 MiB 00:06:28.882 element at address: 0x2000002d5640 with size: 0.000183 MiB 00:06:28.882 element at address: 0x2000002d5700 with size: 0.000183 MiB 00:06:28.882 element at address: 0x2000002d57c0 with size: 0.000183 MiB 00:06:28.882 element at address: 0x2000002d5880 with size: 0.000183 MiB 00:06:28.882 element at address: 0x2000002d5940 with size: 0.000183 MiB 00:06:28.882 element at address: 0x2000002d5a00 with size: 0.000183 MiB 00:06:28.882 element at address: 0x2000002d5ac0 with size: 0.000183 MiB 00:06:28.882 element at address: 0x2000002d5b80 with size: 0.000183 MiB 00:06:28.882 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:06:28.882 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:06:28.882 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:06:28.882 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:06:28.882 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:06:28.882 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:06:28.882 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:06:28.882 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:06:28.882 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:06:28.882 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:06:28.882 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:06:28.882 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:06:28.882 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:06:28.882 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:06:28.882 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:06:28.882 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:06:28.882 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:06:28.882 element at address: 0x2000002d6a40 with size: 0.000183 MiB 00:06:28.882 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:06:28.882 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:06:28.882 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:06:28.882 element at address: 0x2000002d6d40 with size: 0.000183 MiB 00:06:28.882 element at address: 0x2000002d6e00 with size: 0.000183 MiB 00:06:28.882 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:06:28.882 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:06:28.882 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:06:28.882 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:06:28.882 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:06:28.882 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:06:28.882 element at address: 0x2000002d7340 with size: 0.000183 MiB 
00:06:28.882 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:06:28.882 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:06:28.882 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:06:28.882 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:06:28.882 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:06:28.882 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:06:28.882 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:06:28.882 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:06:28.882 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:06:28.882 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:28.882 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:28.882 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:28.882 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:28.882 element at address: 0x20000087c740 with size: 0.000183 MiB 00:06:28.882 element at address: 0x20000087c800 with size: 0.000183 MiB 00:06:28.882 element at address: 0x20000087c8c0 with size: 0.000183 MiB 00:06:28.882 element at address: 0x20000087c980 with size: 0.000183 MiB 00:06:28.882 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:06:28.882 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:06:28.882 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:06:28.882 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:06:28.882 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:06:28.882 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:06:28.882 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:06:28.882 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:06:28.882 element at address: 0x200003a593c0 with size: 0.000183 MiB 00:06:28.882 element at address: 0x200003a59480 with size: 0.000183 MiB 00:06:28.882 element at address: 0x200003a59540 with size: 0.000183 MiB 00:06:28.882 element at address: 0x200003a59600 with size: 0.000183 MiB 00:06:28.882 element at address: 0x200003a596c0 with size: 0.000183 MiB 00:06:28.882 element at address: 0x200003a59780 with size: 0.000183 MiB 00:06:28.882 element at address: 0x200003a59840 with size: 0.000183 MiB 00:06:28.882 element at address: 0x200003a59900 with size: 0.000183 MiB 00:06:28.882 element at address: 0x200003a599c0 with size: 0.000183 MiB 00:06:28.882 element at address: 0x200003a59a80 with size: 0.000183 MiB 00:06:28.882 element at address: 0x200003a59b40 with size: 0.000183 MiB 00:06:28.882 element at address: 0x200003a59c00 with size: 0.000183 MiB 00:06:28.882 element at address: 0x200003a59cc0 with size: 0.000183 MiB 00:06:28.882 element at address: 0x200003a59d80 with size: 0.000183 MiB 00:06:28.882 element at address: 0x200003a59e40 with size: 0.000183 MiB 00:06:28.882 element at address: 0x200003a59f00 with size: 0.000183 MiB 00:06:28.882 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:06:28.882 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:06:28.882 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:06:28.882 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:06:28.882 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:06:28.882 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:06:28.882 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:06:28.882 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:06:28.882 element at 
address: 0x200003a5a5c0 with size: 0.000183 MiB 00:06:28.882 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:06:28.882 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:06:28.882 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:06:28.882 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:06:28.882 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:06:28.882 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:06:28.882 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:06:28.882 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:06:28.882 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:06:28.882 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:06:28.882 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:06:28.882 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:06:28.882 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:06:28.882 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:06:28.882 element at address: 0x200003adb300 with size: 0.000183 MiB 00:06:28.882 element at address: 0x200003adb500 with size: 0.000183 MiB 00:06:28.882 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:06:28.882 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:28.882 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:28.882 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:28.883 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:06:28.883 element at address: 0x20000b27d280 with size: 0.000183 MiB 00:06:28.883 element at address: 0x20000b27d340 with size: 0.000183 MiB 00:06:28.883 element at address: 0x20000b27d400 with size: 0.000183 MiB 00:06:28.883 element at address: 0x20000b27d4c0 with size: 0.000183 MiB 00:06:28.883 element at address: 0x20000b27d580 with size: 0.000183 MiB 00:06:28.883 element at address: 0x20000b27d640 with size: 0.000183 MiB 00:06:28.883 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:06:28.883 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:06:28.883 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:06:28.883 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:06:28.883 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:06:28.883 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:06:28.883 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:06:28.883 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:06:28.883 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:06:28.883 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:06:28.883 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:06:28.883 element at address: 0x20001aa91b40 with size: 0.000183 MiB 00:06:28.883 element at address: 0x20001aa91c00 with size: 0.000183 MiB 00:06:28.883 element at address: 0x20001aa91cc0 with size: 0.000183 MiB 00:06:28.883 element at address: 0x20001aa91d80 with size: 0.000183 MiB 00:06:28.883 element at address: 0x20001aa91e40 with size: 0.000183 MiB 00:06:28.883 element at address: 0x20001aa91f00 with size: 0.000183 MiB 00:06:28.883 element at address: 0x20001aa91fc0 with size: 0.000183 MiB 00:06:28.883 element at address: 0x20001aa92080 with size: 0.000183 MiB 00:06:28.883 element at address: 0x20001aa92140 with size: 0.000183 MiB 00:06:28.883 element at address: 0x20001aa92200 with size: 0.000183 MiB 00:06:28.883 element at address: 0x20001aa922c0 
with size: 0.000183 MiB 00:06:28.883 element at address: 0x20001aa92380 with size: 0.000183 MiB [~140 further identical 0.000183 MiB heap elements, 0x20001aa92440 through 0x20001aa95440 and 0x200027e65500 through 0x200027e6fd80] 00:06:28.884 element at address: 0x200027e6fe40
with size: 0.000183 MiB 00:06:28.884 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:06:28.884 list of memzone associated elements. size: 602.262573 MiB 00:06:28.884 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:06:28.884 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:28.884 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:06:28.884 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:28.884 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:06:28.884 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_74801_0 00:06:28.884 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:28.884 associated memzone info: size: 48.002930 MiB name: MP_evtpool_74801_0 00:06:28.884 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:28.884 associated memzone info: size: 48.002930 MiB name: MP_msgpool_74801_0 00:06:28.884 element at address: 0x2000195be940 with size: 20.255554 MiB 00:06:28.884 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:28.884 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:06:28.884 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:28.884 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:28.884 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_74801 00:06:28.884 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:28.884 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_74801 00:06:28.884 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:28.884 associated memzone info: size: 1.007996 MiB name: MP_evtpool_74801 00:06:28.884 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:06:28.884 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:28.884 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:06:28.884 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:28.884 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:06:28.884 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:28.884 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:06:28.884 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:28.884 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:28.884 associated memzone info: size: 1.000366 MiB name: RG_ring_0_74801 00:06:28.884 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:28.884 associated memzone info: size: 1.000366 MiB name: RG_ring_1_74801 00:06:28.884 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:06:28.884 associated memzone info: size: 1.000366 MiB name: RG_ring_4_74801 00:06:28.884 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:06:28.884 associated memzone info: size: 1.000366 MiB name: RG_ring_5_74801 00:06:28.884 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:06:28.884 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_74801 00:06:28.884 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:06:28.884 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:28.884 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:06:28.884 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:28.884 element at address: 0x20001947c540 with size: 0.250488 MiB 00:06:28.884 associated memzone info: size: 0.250366 MiB name: 
RG_MP_PDU_immediate_data_Pool 00:06:28.884 element at address: 0x200003adf880 with size: 0.125488 MiB 00:06:28.884 associated memzone info: size: 0.125366 MiB name: RG_ring_2_74801 00:06:28.884 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:06:28.884 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:28.884 element at address: 0x200027e65680 with size: 0.023743 MiB 00:06:28.884 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:28.884 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:06:28.884 associated memzone info: size: 0.015991 MiB name: RG_ring_3_74801 00:06:28.884 element at address: 0x200027e6b7c0 with size: 0.002441 MiB 00:06:28.884 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:28.884 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:06:28.884 associated memzone info: size: 0.000183 MiB name: MP_msgpool_74801 00:06:28.884 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:06:28.884 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_74801 00:06:28.884 element at address: 0x200027e6c280 with size: 0.000305 MiB 00:06:28.884 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:28.884 18:13:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:28.884 18:13:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 74801 00:06:28.884 18:13:15 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 74801 ']' 00:06:28.884 18:13:15 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 74801 00:06:28.884 18:13:15 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:06:28.884 18:13:15 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:28.884 18:13:15 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74801 00:06:28.884 18:13:15 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:28.884 killing process with pid 74801 00:06:28.884 18:13:15 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:28.884 18:13:15 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74801' 00:06:28.884 18:13:15 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 74801 00:06:28.884 18:13:15 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 74801 00:06:29.143 00:06:29.143 real 0m1.540s 00:06:29.143 user 0m1.741s 00:06:29.143 sys 0m0.364s 00:06:29.143 ************************************ 00:06:29.143 END TEST dpdk_mem_utility 00:06:29.143 ************************************ 00:06:29.143 18:13:15 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:29.143 18:13:15 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:29.143 18:13:15 -- common/autotest_common.sh@1142 -- # return 0 00:06:29.143 18:13:15 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:29.143 18:13:15 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:29.143 18:13:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:29.143 18:13:15 -- common/autotest_common.sh@10 -- # set +x 00:06:29.143 ************************************ 00:06:29.143 START TEST event 00:06:29.143 ************************************ 00:06:29.143 18:13:15 event -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 
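Before the event suite output: the memzone dump that closes the dpdk_mem_utility test above is dominated by 0.000183 MiB entries, which works out to roughly 192 bytes apiece. A minimal sketch for auditing such a dump in aggregate, assuming it has been saved to a file first (dpdk_mem.log is a hypothetical name); it uses only awk and tallies element counts and totals per distinct size:

    #!/usr/bin/env bash
    # Tally a DPDK memory dump: count elements per distinct size and sum
    # the total. Matches lines of the form seen above:
    #   element at address: 0x20001aa92380 with size: 0.000183 MiB
    awk '/element at address:/ {
            for (i = 1; i <= NF; i++)
                if ($i == "size:") { count[$(i + 1)]++; total += $(i + 1) }
         }
         END {
            for (s in count) printf "%8d elements of %s MiB\n", count[s], s
            printf "total: %.6f MiB\n", total
         }' dpdk_mem.log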
00:06:29.403 * Looking for test storage... 00:06:29.403 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:29.403 18:13:15 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:29.403 18:13:15 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:29.403 18:13:15 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:29.403 18:13:15 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:29.403 18:13:15 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:29.403 18:13:15 event -- common/autotest_common.sh@10 -- # set +x 00:06:29.403 ************************************ 00:06:29.403 START TEST event_perf 00:06:29.403 ************************************ 00:06:29.403 18:13:15 event.event_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:29.403 Running I/O for 1 seconds...[2024-07-11 18:13:15.668988] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:06:29.403 [2024-07-11 18:13:15.669216] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74874 ] 00:06:29.403 [2024-07-11 18:13:15.815528] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:29.661 [2024-07-11 18:13:15.850569] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:29.661 Running I/O for 1 seconds...[2024-07-11 18:13:15.850771] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.662 [2024-07-11 18:13:15.850791] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:29.662 [2024-07-11 18:13:15.850820] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:30.599 00:06:30.599 lcore 0: 201210 00:06:30.599 lcore 1: 201207 00:06:30.599 lcore 2: 201207 00:06:30.599 lcore 3: 201208 00:06:30.599 done. 00:06:30.599 00:06:30.599 real 0m1.287s 00:06:30.599 user 0m4.072s 00:06:30.599 sys 0m0.096s 00:06:30.599 18:13:16 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:30.599 ************************************ 00:06:30.599 END TEST event_perf 00:06:30.599 ************************************ 00:06:30.599 18:13:16 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:30.599 18:13:16 event -- common/autotest_common.sh@1142 -- # return 0 00:06:30.599 18:13:16 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:30.599 18:13:16 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:30.599 18:13:16 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:30.599 18:13:16 event -- common/autotest_common.sh@10 -- # set +x 00:06:30.599 ************************************ 00:06:30.599 START TEST event_reactor 00:06:30.599 ************************************ 00:06:30.599 18:13:16 event.event_reactor -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:30.599 [2024-07-11 18:13:16.998400] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
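The event_perf pass above appears to measure how many events each reactor can dispatch in one second; the four per-lcore counters (about 201k each) sum to roughly 805k events. A hedged sketch for re-running it by hand and totalling the lcore lines; the binary path and the -m/-t flags are exactly the ones shown in the trace:

    #!/usr/bin/env bash
    # Re-run event_perf on 4 reactors (mask 0xF) for 1 second and sum the
    # per-lcore counters it prints ("lcore N: <count>").
    EVENT_PERF=/home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf
    sudo "$EVENT_PERF" -m 0xF -t 1 |
        awk '/^lcore/ { sum += $3 } END { printf "total: %d events/sec\n", sum }'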
00:06:30.599 [2024-07-11 18:13:16.998590] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74908 ] 00:06:30.858 [2024-07-11 18:13:17.136627] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.858 [2024-07-11 18:13:17.167920] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.233 test_start 00:06:32.233 oneshot 00:06:32.233 tick 100 00:06:32.233 tick 100 00:06:32.233 tick 250 00:06:32.233 tick 100 00:06:32.233 tick 100 00:06:32.233 tick 250 00:06:32.233 tick 100 00:06:32.233 tick 500 00:06:32.233 tick 100 00:06:32.233 tick 100 00:06:32.233 tick 250 00:06:32.233 tick 100 00:06:32.233 tick 100 00:06:32.233 test_end 00:06:32.233 00:06:32.233 real 0m1.270s 00:06:32.233 user 0m1.105s 00:06:32.233 sys 0m0.058s 00:06:32.233 18:13:18 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:32.233 ************************************ 00:06:32.233 END TEST event_reactor 00:06:32.233 ************************************ 00:06:32.233 18:13:18 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:32.233 18:13:18 event -- common/autotest_common.sh@1142 -- # return 0 00:06:32.233 18:13:18 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:32.233 18:13:18 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:32.233 18:13:18 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:32.233 18:13:18 event -- common/autotest_common.sh@10 -- # set +x 00:06:32.233 ************************************ 00:06:32.233 START TEST event_reactor_perf 00:06:32.233 ************************************ 00:06:32.233 18:13:18 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:32.233 [2024-07-11 18:13:18.328506] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
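In the reactor test above, the tick lines are timed pollers reporting each expiry: over the one-second run the counts fall off roughly in proportion to the period (nine "tick 100" lines, three at 250, one at 500), which suggests the numbers are poller periods in milliseconds, though the trace itself does not say. A small sketch that tallies the same schedule:

    #!/usr/bin/env bash
    # Run the reactor poller test for 1 second and count how often each
    # timed poller fired; counts should scale inversely with the period.
    REACTOR=/home/vagrant/spdk_repo/spdk/test/event/reactor/reactor
    sudo "$REACTOR" -t 1 | sort | uniq -c | sort -rn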
00:06:32.233 [2024-07-11 18:13:18.328707] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74944 ] 00:06:32.233 [2024-07-11 18:13:18.475377] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.233 [2024-07-11 18:13:18.510635] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.168 test_start 00:06:33.168 test_end 00:06:33.169 Performance: 312650 events per second 00:06:33.428 00:06:33.428 real 0m1.288s 00:06:33.428 user 0m1.107s 00:06:33.428 sys 0m0.074s 00:06:33.428 18:13:19 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:33.428 ************************************ 00:06:33.428 END TEST event_reactor_perf 00:06:33.428 18:13:19 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:33.428 ************************************ 00:06:33.428 18:13:19 event -- common/autotest_common.sh@1142 -- # return 0 00:06:33.428 18:13:19 event -- event/event.sh@49 -- # uname -s 00:06:33.428 18:13:19 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:33.428 18:13:19 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:33.428 18:13:19 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:33.428 18:13:19 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:33.428 18:13:19 event -- common/autotest_common.sh@10 -- # set +x 00:06:33.428 ************************************ 00:06:33.428 START TEST event_scheduler 00:06:33.428 ************************************ 00:06:33.428 18:13:19 event.event_scheduler -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:33.428 * Looking for test storage... 00:06:33.428 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:33.428 18:13:19 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:33.428 18:13:19 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=75007 00:06:33.428 18:13:19 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:33.428 18:13:19 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:33.428 18:13:19 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 75007 00:06:33.428 18:13:19 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 75007 ']' 00:06:33.428 18:13:19 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:33.428 18:13:19 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:33.428 18:13:19 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:33.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:33.428 18:13:19 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:33.428 18:13:19 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:33.428 [2024-07-11 18:13:19.821753] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
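reactor_perf above reports a single throughput figure (312,650 events per second here). One-shot microbenchmarks like this are noisy, so a hedged sketch that repeats the run and averages the "Performance:" line, reusing the binary path and -t flag from the trace:

    #!/usr/bin/env bash
    # Average the reactor_perf throughput over several 1-second runs.
    # Parses the "Performance: <N> events per second" line shown above.
    PERF=/home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf
    for _ in 1 2 3 4 5; do
        sudo "$PERF" -t 1
    done | awk '/Performance:/ { sum += $2; n++ }
                END { if (n) printf "avg of %d runs: %.0f events/sec\n", n, sum / n }'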
00:06:33.428 [2024-07-11 18:13:19.821952] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75007 ] 00:06:33.687 [2024-07-11 18:13:19.975004] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:33.687 [2024-07-11 18:13:20.024774] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.687 [2024-07-11 18:13:20.024960] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:33.687 [2024-07-11 18:13:20.025125] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:33.687 [2024-07-11 18:13:20.025300] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:34.622 18:13:20 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:34.622 18:13:20 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:06:34.622 18:13:20 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:34.622 18:13:20 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:34.622 18:13:20 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:34.622 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:34.622 POWER: Cannot set governor of lcore 0 to userspace 00:06:34.622 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:34.622 POWER: Cannot set governor of lcore 0 to performance 00:06:34.622 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:34.622 POWER: Cannot set governor of lcore 0 to userspace 00:06:34.622 GUEST_CHANNEL: Unable to to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:34.622 POWER: Unable to set Power Management Environment for lcore 0 00:06:34.622 [2024-07-11 18:13:20.787237] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:06:34.622 [2024-07-11 18:13:20.787262] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:06:34.622 [2024-07-11 18:13:20.787291] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:06:34.622 [2024-07-11 18:13:20.787318] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:34.622 [2024-07-11 18:13:20.787333] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:34.622 [2024-07-11 18:13:20.787348] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:34.622 18:13:20 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:34.622 18:13:20 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:34.622 18:13:20 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:34.622 18:13:20 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:34.622 [2024-07-11 18:13:20.842609] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
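The POWER and GUEST_CHANNEL errors above are expected on this VM: the dynamic scheduler asks the DPDK power library to take over each core's cpufreq governor, finds no /sys/devices/system/cpu/cpuN/cpufreq nodes, and falls back to running without a governor (hence the "Unable to initialize dpdk governor" notice). A quick preflight check for whether a host can support governor control at all:

    #!/usr/bin/env bash
    # Print each core's current cpufreq governor, or report that the host
    # (e.g. a VM without exposed P-states) has no cpufreq support at all.
    for gov in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do
        [ -e "$gov" ] || { echo "no cpufreq support (governor errors are expected)"; exit 0; }
        printf '%s: %s\n' "$gov" "$(cat "$gov")"
    done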
00:06:34.622 18:13:20 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:34.622 18:13:20 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:34.622 18:13:20 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:34.622 18:13:20 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:34.622 18:13:20 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:34.622 ************************************ 00:06:34.622 START TEST scheduler_create_thread 00:06:34.622 ************************************ 00:06:34.622 18:13:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:06:34.622 18:13:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:34.622 18:13:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:34.622 18:13:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:34.622 2 00:06:34.622 18:13:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:34.622 18:13:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:34.622 18:13:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:34.622 18:13:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:34.622 3 00:06:34.622 18:13:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:34.622 18:13:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:34.622 18:13:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:34.622 18:13:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:34.622 4 00:06:34.622 18:13:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:34.622 18:13:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:34.622 18:13:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:34.622 18:13:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:34.622 5 00:06:34.622 18:13:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:34.622 18:13:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:34.622 18:13:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:34.622 18:13:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:34.622 6 00:06:34.622 18:13:20 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:34.622 18:13:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:34.622 18:13:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:34.622 18:13:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:34.622 7 00:06:34.622 18:13:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:34.622 18:13:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:34.622 18:13:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:34.622 18:13:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:34.622 8 00:06:34.622 18:13:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:34.622 18:13:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:34.622 18:13:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:34.622 18:13:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:34.622 9 00:06:34.622 18:13:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:34.622 18:13:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:34.622 18:13:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:34.622 18:13:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:34.622 10 00:06:34.622 18:13:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:34.622 18:13:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:34.622 18:13:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:34.622 18:13:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:34.622 18:13:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:34.622 18:13:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:34.623 18:13:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:34.623 18:13:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:34.623 18:13:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:34.623 18:13:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:34.623 18:13:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:34.623 18:13:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:34.623 18:13:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:35.998 18:13:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:36.256 18:13:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:36.256 18:13:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:36.256 18:13:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:36.256 18:13:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:37.205 18:13:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:37.205 00:06:37.205 real 0m2.616s 00:06:37.205 user 0m0.015s 00:06:37.205 sys 0m0.010s 00:06:37.205 18:13:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:37.205 18:13:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:37.205 ************************************ 00:06:37.205 END TEST scheduler_create_thread 00:06:37.205 ************************************ 00:06:37.205 18:13:23 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:06:37.205 18:13:23 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:37.205 18:13:23 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 75007 00:06:37.205 18:13:23 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 75007 ']' 00:06:37.205 18:13:23 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 75007 00:06:37.205 18:13:23 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:06:37.205 18:13:23 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:37.205 18:13:23 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75007 00:06:37.205 18:13:23 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:06:37.205 18:13:23 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:06:37.206 18:13:23 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75007' 00:06:37.206 killing process with pid 75007 00:06:37.206 18:13:23 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 75007 00:06:37.206 18:13:23 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 75007 00:06:37.786 [2024-07-11 18:13:23.951312] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
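The scheduler_create_thread test above drives everything through rpc_cmd with the test's scheduler_plugin. A hedged distillation of that RPC sequence, assuming the plugin module is importable (the test runs with its directory on PYTHONPATH) and that scheduler_thread_create prints the new thread id, as the thread_id=11 capture in the trace suggests; the thread name "demo" is made up for illustration:

    #!/usr/bin/env bash
    # Create, throttle, and delete a scheduler thread over RPC, mirroring
    # the calls traced above (dynamic scheduler, then plugin methods).
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$RPC" framework_set_scheduler dynamic
    "$RPC" framework_start_init
    tid=$("$RPC" --plugin scheduler_plugin scheduler_thread_create -n demo -m 0x1 -a 100)
    "$RPC" --plugin scheduler_plugin scheduler_thread_set_active "$tid" 50
    "$RPC" --plugin scheduler_plugin scheduler_thread_delete "$tid"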
00:06:37.786 00:06:37.786 real 0m4.512s 00:06:37.786 user 0m8.591s 00:06:37.786 sys 0m0.413s 00:06:37.786 18:13:24 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:37.786 ************************************ 00:06:37.786 END TEST event_scheduler 00:06:37.786 ************************************ 00:06:37.787 18:13:24 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:38.046 18:13:24 event -- common/autotest_common.sh@1142 -- # return 0 00:06:38.046 18:13:24 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:38.046 18:13:24 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:38.046 18:13:24 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:38.046 18:13:24 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:38.046 18:13:24 event -- common/autotest_common.sh@10 -- # set +x 00:06:38.046 ************************************ 00:06:38.046 START TEST app_repeat 00:06:38.046 ************************************ 00:06:38.046 18:13:24 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:06:38.046 18:13:24 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:38.046 18:13:24 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:38.046 18:13:24 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:38.046 18:13:24 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:38.046 18:13:24 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:38.046 18:13:24 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:38.046 18:13:24 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:38.046 18:13:24 event.app_repeat -- event/event.sh@19 -- # repeat_pid=75104 00:06:38.046 18:13:24 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:38.046 18:13:24 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:38.046 Process app_repeat pid: 75104 00:06:38.046 18:13:24 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 75104' 00:06:38.046 18:13:24 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:38.046 spdk_app_start Round 0 00:06:38.046 18:13:24 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:38.046 18:13:24 event.app_repeat -- event/event.sh@25 -- # waitforlisten 75104 /var/tmp/spdk-nbd.sock 00:06:38.046 18:13:24 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 75104 ']' 00:06:38.046 18:13:24 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:38.046 18:13:24 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:38.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:38.046 18:13:24 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:38.046 18:13:24 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:38.046 18:13:24 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:38.046 [2024-07-11 18:13:24.273931] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
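Each app_repeat round below follows the same shape: create two malloc bdevs, export them as nbd block devices, write and verify data, then tear everything down. The RPC half of that, pulled straight from the trace (Malloc0 and Malloc1 are the names bdev_malloc_create auto-assigns in this run):

    #!/usr/bin/env bash
    # Set up and tear down the app_repeat topology by hand: two 64 MiB
    # malloc bdevs with 4096-byte blocks, each exported over nbd.
    # Assumes an SPDK app is already listening on the socket below.
    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $RPC bdev_malloc_create 64 4096      # prints the new name, e.g. Malloc0
    $RPC bdev_malloc_create 64 4096      # e.g. Malloc1
    $RPC nbd_start_disk Malloc0 /dev/nbd0
    $RPC nbd_start_disk Malloc1 /dev/nbd1
    $RPC nbd_get_disks                   # JSON list of device/bdev pairs
    $RPC nbd_stop_disk /dev/nbd0
    $RPC nbd_stop_disk /dev/nbd1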
00:06:38.046 [2024-07-11 18:13:24.274753] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75104 ] 00:06:38.046 [2024-07-11 18:13:24.421589] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:38.046 [2024-07-11 18:13:24.458020] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.046 [2024-07-11 18:13:24.458107] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:38.982 18:13:25 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:38.982 18:13:25 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:38.982 18:13:25 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:39.240 Malloc0 00:06:39.240 18:13:25 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:39.240 Malloc1 00:06:39.499 18:13:25 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:39.499 18:13:25 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:39.499 18:13:25 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:39.499 18:13:25 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:39.499 18:13:25 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:39.499 18:13:25 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:39.499 18:13:25 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:39.499 18:13:25 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:39.499 18:13:25 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:39.499 18:13:25 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:39.499 18:13:25 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:39.499 18:13:25 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:39.499 18:13:25 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:39.500 18:13:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:39.500 18:13:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:39.500 18:13:25 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:39.500 /dev/nbd0 00:06:39.500 18:13:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:39.500 18:13:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:39.500 18:13:25 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:39.500 18:13:25 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:39.500 18:13:25 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:39.500 18:13:25 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:39.500 18:13:25 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:39.500 18:13:25 event.app_repeat -- 
common/autotest_common.sh@871 -- # break 00:06:39.500 18:13:25 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:39.500 18:13:25 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:39.500 18:13:25 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:39.500 1+0 records in 00:06:39.500 1+0 records out 00:06:39.500 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000275614 s, 14.9 MB/s 00:06:39.500 18:13:25 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:39.500 18:13:25 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:39.500 18:13:25 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:39.759 18:13:25 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:39.759 18:13:25 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:39.759 18:13:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:39.759 18:13:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:39.759 18:13:25 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:39.759 /dev/nbd1 00:06:39.759 18:13:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:39.759 18:13:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:39.759 18:13:26 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:39.759 18:13:26 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:39.759 18:13:26 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:39.759 18:13:26 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:39.759 18:13:26 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:39.759 18:13:26 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:39.759 18:13:26 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:39.759 18:13:26 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:39.759 18:13:26 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:39.759 1+0 records in 00:06:39.759 1+0 records out 00:06:39.759 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000386388 s, 10.6 MB/s 00:06:39.759 18:13:26 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:39.759 18:13:26 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:39.759 18:13:26 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:40.018 18:13:26 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:40.018 18:13:26 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:40.018 18:13:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:40.018 18:13:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:40.018 18:13:26 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:40.018 18:13:26 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 
00:06:40.018 18:13:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:40.278 18:13:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:40.278 { 00:06:40.278 "nbd_device": "/dev/nbd0", 00:06:40.278 "bdev_name": "Malloc0" 00:06:40.278 }, 00:06:40.278 { 00:06:40.278 "nbd_device": "/dev/nbd1", 00:06:40.278 "bdev_name": "Malloc1" 00:06:40.278 } 00:06:40.278 ]' 00:06:40.278 18:13:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:40.278 18:13:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:40.278 { 00:06:40.278 "nbd_device": "/dev/nbd0", 00:06:40.278 "bdev_name": "Malloc0" 00:06:40.278 }, 00:06:40.278 { 00:06:40.278 "nbd_device": "/dev/nbd1", 00:06:40.278 "bdev_name": "Malloc1" 00:06:40.278 } 00:06:40.278 ]' 00:06:40.278 18:13:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:40.278 /dev/nbd1' 00:06:40.278 18:13:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:40.278 18:13:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:40.278 /dev/nbd1' 00:06:40.278 18:13:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:40.278 18:13:26 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:40.278 18:13:26 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:40.278 18:13:26 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:40.278 18:13:26 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:40.278 18:13:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:40.278 18:13:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:40.278 18:13:26 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:40.278 18:13:26 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:40.278 18:13:26 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:40.278 18:13:26 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:40.278 256+0 records in 00:06:40.278 256+0 records out 00:06:40.278 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00936093 s, 112 MB/s 00:06:40.278 18:13:26 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:40.278 18:13:26 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:40.278 256+0 records in 00:06:40.278 256+0 records out 00:06:40.278 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0226538 s, 46.3 MB/s 00:06:40.278 18:13:26 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:40.278 18:13:26 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:40.278 256+0 records in 00:06:40.278 256+0 records out 00:06:40.278 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0278382 s, 37.7 MB/s 00:06:40.278 18:13:26 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:40.278 18:13:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:40.278 18:13:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:40.278 18:13:26 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:40.278 18:13:26 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:40.278 18:13:26 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:40.278 18:13:26 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:40.278 18:13:26 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:40.278 18:13:26 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:40.278 18:13:26 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:40.278 18:13:26 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:40.278 18:13:26 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:40.278 18:13:26 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:40.278 18:13:26 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:40.278 18:13:26 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:40.278 18:13:26 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:40.278 18:13:26 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:40.278 18:13:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:40.278 18:13:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:40.537 18:13:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:40.537 18:13:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:40.537 18:13:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:40.537 18:13:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:40.537 18:13:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:40.537 18:13:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:40.537 18:13:26 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:40.537 18:13:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:40.537 18:13:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:40.537 18:13:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:40.800 18:13:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:40.800 18:13:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:40.800 18:13:27 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:40.800 18:13:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:40.800 18:13:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:40.800 18:13:27 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:40.800 18:13:27 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:40.800 18:13:27 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:40.800 18:13:27 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:40.800 18:13:27 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:40.800 18:13:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:41.069 18:13:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:41.069 18:13:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:41.069 18:13:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:41.069 18:13:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:41.069 18:13:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:41.069 18:13:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:41.069 18:13:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:41.069 18:13:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:41.069 18:13:27 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:41.069 18:13:27 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:41.069 18:13:27 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:41.069 18:13:27 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:41.069 18:13:27 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:41.331 18:13:27 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:41.590 [2024-07-11 18:13:27.853491] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:41.590 [2024-07-11 18:13:27.885657] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.590 [2024-07-11 18:13:27.885657] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:41.590 [2024-07-11 18:13:27.914794] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:41.590 [2024-07-11 18:13:27.914871] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:44.878 18:13:30 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:44.878 18:13:30 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:44.878 spdk_app_start Round 1 00:06:44.878 18:13:30 event.app_repeat -- event/event.sh@25 -- # waitforlisten 75104 /var/tmp/spdk-nbd.sock 00:06:44.878 18:13:30 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 75104 ']' 00:06:44.878 18:13:30 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:44.878 18:13:30 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:44.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:44.878 18:13:30 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
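Round 0 above ends with the verify pass; stripped of the nbd_common.sh plumbing, it reduces to writing the same random MiB to each device and byte-comparing it back. A standalone sketch of that pattern, assuming /dev/nbd0 and /dev/nbd1 are connected as in the round just finished:

    #!/usr/bin/env bash
    # Write 1 MiB of random data to each nbd device with O_DIRECT, then
    # verify it byte-for-byte, as the dd/cmp calls in the trace do.
    tmp=$(mktemp)
    dd if=/dev/urandom of="$tmp" bs=4096 count=256
    for dev in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct
        cmp -b -n 1M "$tmp" "$dev" && echo "$dev: OK"
    done
    rm -f "$tmp"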
00:06:44.878 18:13:30 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:44.878 18:13:30 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:44.878 18:13:31 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:44.878 18:13:31 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:44.878 18:13:31 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:44.878 Malloc0 00:06:44.878 18:13:31 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:45.137 Malloc1 00:06:45.137 18:13:31 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:45.137 18:13:31 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:45.137 18:13:31 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:45.137 18:13:31 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:45.137 18:13:31 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:45.137 18:13:31 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:45.137 18:13:31 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:45.137 18:13:31 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:45.137 18:13:31 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:45.137 18:13:31 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:45.137 18:13:31 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:45.137 18:13:31 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:45.137 18:13:31 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:45.137 18:13:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:45.137 18:13:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:45.137 18:13:31 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:45.396 /dev/nbd0 00:06:45.396 18:13:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:45.396 18:13:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:45.396 18:13:31 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:45.396 18:13:31 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:45.396 18:13:31 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:45.396 18:13:31 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:45.396 18:13:31 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:45.397 18:13:31 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:45.397 18:13:31 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:45.397 18:13:31 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:45.397 18:13:31 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:45.397 1+0 records in 00:06:45.397 1+0 records out 
00:06:45.397 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000325005 s, 12.6 MB/s 00:06:45.397 18:13:31 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:45.397 18:13:31 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:45.397 18:13:31 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:45.397 18:13:31 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:45.397 18:13:31 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:45.397 18:13:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:45.397 18:13:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:45.397 18:13:31 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:45.656 /dev/nbd1 00:06:45.656 18:13:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:45.656 18:13:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:45.656 18:13:31 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:45.656 18:13:31 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:45.656 18:13:31 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:45.656 18:13:31 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:45.656 18:13:31 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:45.656 18:13:31 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:45.656 18:13:31 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:45.656 18:13:31 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:45.656 18:13:31 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:45.656 1+0 records in 00:06:45.656 1+0 records out 00:06:45.656 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000374021 s, 11.0 MB/s 00:06:45.656 18:13:31 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:45.656 18:13:31 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:45.656 18:13:31 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:45.656 18:13:31 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:45.656 18:13:31 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:45.656 18:13:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:45.656 18:13:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:45.656 18:13:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:45.656 18:13:31 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:45.656 18:13:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:45.915 18:13:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:45.915 { 00:06:45.915 "nbd_device": "/dev/nbd0", 00:06:45.915 "bdev_name": "Malloc0" 00:06:45.915 }, 00:06:45.915 { 00:06:45.915 "nbd_device": "/dev/nbd1", 00:06:45.915 "bdev_name": "Malloc1" 00:06:45.915 } 
00:06:45.915 ]' 00:06:45.915 18:13:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:45.915 { 00:06:45.915 "nbd_device": "/dev/nbd0", 00:06:45.915 "bdev_name": "Malloc0" 00:06:45.915 }, 00:06:45.915 { 00:06:45.915 "nbd_device": "/dev/nbd1", 00:06:45.915 "bdev_name": "Malloc1" 00:06:45.915 } 00:06:45.915 ]' 00:06:45.915 18:13:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:45.915 18:13:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:45.915 /dev/nbd1' 00:06:45.915 18:13:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:45.915 /dev/nbd1' 00:06:45.915 18:13:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:45.915 18:13:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:45.915 18:13:32 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:45.915 18:13:32 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:45.915 18:13:32 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:45.915 18:13:32 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:45.915 18:13:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:45.915 18:13:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:45.915 18:13:32 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:45.915 18:13:32 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:45.915 18:13:32 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:45.915 18:13:32 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:45.915 256+0 records in 00:06:45.915 256+0 records out 00:06:45.915 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00914656 s, 115 MB/s 00:06:45.915 18:13:32 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:45.915 18:13:32 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:46.175 256+0 records in 00:06:46.175 256+0 records out 00:06:46.175 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0206413 s, 50.8 MB/s 00:06:46.175 18:13:32 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:46.175 18:13:32 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:46.175 256+0 records in 00:06:46.175 256+0 records out 00:06:46.175 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0266131 s, 39.4 MB/s 00:06:46.175 18:13:32 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:46.175 18:13:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:46.175 18:13:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:46.175 18:13:32 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:46.175 18:13:32 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:46.175 18:13:32 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:46.175 18:13:32 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:46.175 18:13:32 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:46.175 18:13:32 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:46.175 18:13:32 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:46.175 18:13:32 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:46.175 18:13:32 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:46.175 18:13:32 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:46.175 18:13:32 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:46.175 18:13:32 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:46.175 18:13:32 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:46.175 18:13:32 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:46.175 18:13:32 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:46.175 18:13:32 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:46.434 18:13:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:46.434 18:13:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:46.434 18:13:32 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:46.434 18:13:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:46.434 18:13:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:46.434 18:13:32 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:46.434 18:13:32 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:46.434 18:13:32 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:46.434 18:13:32 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:46.434 18:13:32 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:46.694 18:13:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:46.694 18:13:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:46.694 18:13:32 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:46.694 18:13:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:46.694 18:13:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:46.694 18:13:32 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:46.694 18:13:32 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:46.694 18:13:32 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:46.694 18:13:32 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:46.694 18:13:32 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:46.694 18:13:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:46.974 18:13:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:46.974 18:13:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:46.974 18:13:33 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:06:46.974 18:13:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:46.974 18:13:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:46.974 18:13:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:46.974 18:13:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:46.974 18:13:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:46.974 18:13:33 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:46.974 18:13:33 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:46.974 18:13:33 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:46.974 18:13:33 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:46.974 18:13:33 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:47.233 18:13:33 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:47.233 [2024-07-11 18:13:33.545796] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:47.233 [2024-07-11 18:13:33.579154] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.233 [2024-07-11 18:13:33.579155] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:47.233 [2024-07-11 18:13:33.608177] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:47.233 [2024-07-11 18:13:33.608255] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:50.511 18:13:36 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:50.511 spdk_app_start Round 2 00:06:50.511 18:13:36 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:50.511 18:13:36 event.app_repeat -- event/event.sh@25 -- # waitforlisten 75104 /var/tmp/spdk-nbd.sock 00:06:50.511 18:13:36 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 75104 ']' 00:06:50.511 18:13:36 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:50.511 18:13:36 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:50.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:50.511 18:13:36 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
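
Round 1's nbd_dd_data_verify pass, which round 2 is about to repeat, reduces to the following write-then-compare cycle once the trace markers are stripped (sketch; SPDK_DIR is a stand-in for the repo root seen in the paths above):

    # Seed 1 MiB of random data, push it through both NBD devices with
    # O_DIRECT, then compare each device byte-for-byte against the source.
    SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}
    tmp_file=$SPDK_DIR/test/event/nbdrandtest
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp_file" of="$nbd" bs=4096 count=256 oflag=direct
    done
    for nbd in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M "$tmp_file" "$nbd"   # non-zero exit on the first mismatch
    done
    rm "$tmp_file"
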
00:06:50.511 18:13:36 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:50.511 18:13:36 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:50.511 18:13:36 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:50.511 18:13:36 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:50.511 18:13:36 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:50.511 Malloc0 00:06:50.511 18:13:36 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:51.078 Malloc1 00:06:51.078 18:13:37 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:51.078 18:13:37 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:51.078 18:13:37 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:51.078 18:13:37 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:51.078 18:13:37 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:51.078 18:13:37 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:51.078 18:13:37 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:51.078 18:13:37 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:51.078 18:13:37 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:51.078 18:13:37 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:51.078 18:13:37 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:51.078 18:13:37 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:51.078 18:13:37 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:51.078 18:13:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:51.078 18:13:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:51.078 18:13:37 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:51.078 /dev/nbd0 00:06:51.078 18:13:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:51.078 18:13:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:51.078 18:13:37 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:51.078 18:13:37 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:51.078 18:13:37 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:51.078 18:13:37 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:51.078 18:13:37 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:51.078 18:13:37 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:51.078 18:13:37 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:51.078 18:13:37 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:51.078 18:13:37 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:51.078 1+0 records in 00:06:51.078 1+0 records out 
00:06:51.078 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000284903 s, 14.4 MB/s 00:06:51.343 18:13:37 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:51.343 18:13:37 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:51.343 18:13:37 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:51.343 18:13:37 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:51.343 18:13:37 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:51.343 18:13:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:51.343 18:13:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:51.343 18:13:37 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:51.343 /dev/nbd1 00:06:51.343 18:13:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:51.343 18:13:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:51.343 18:13:37 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:51.343 18:13:37 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:51.343 18:13:37 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:51.343 18:13:37 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:51.343 18:13:37 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:51.343 18:13:37 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:51.343 18:13:37 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:51.343 18:13:37 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:51.343 18:13:37 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:51.343 1+0 records in 00:06:51.343 1+0 records out 00:06:51.343 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00039032 s, 10.5 MB/s 00:06:51.343 18:13:37 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:51.343 18:13:37 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:51.343 18:13:37 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:51.343 18:13:37 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:51.343 18:13:37 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:51.343 18:13:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:51.343 18:13:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:51.343 18:13:37 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:51.343 18:13:37 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:51.343 18:13:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:51.614 18:13:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:51.614 { 00:06:51.614 "nbd_device": "/dev/nbd0", 00:06:51.614 "bdev_name": "Malloc0" 00:06:51.614 }, 00:06:51.614 { 00:06:51.614 "nbd_device": "/dev/nbd1", 00:06:51.614 "bdev_name": "Malloc1" 00:06:51.614 } 
00:06:51.614 ]' 00:06:51.614 18:13:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:51.614 { 00:06:51.614 "nbd_device": "/dev/nbd0", 00:06:51.614 "bdev_name": "Malloc0" 00:06:51.614 }, 00:06:51.614 { 00:06:51.614 "nbd_device": "/dev/nbd1", 00:06:51.614 "bdev_name": "Malloc1" 00:06:51.614 } 00:06:51.614 ]' 00:06:51.614 18:13:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:51.872 18:13:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:51.872 /dev/nbd1' 00:06:51.872 18:13:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:51.872 /dev/nbd1' 00:06:51.872 18:13:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:51.872 18:13:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:51.872 18:13:38 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:51.872 18:13:38 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:51.872 18:13:38 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:51.872 18:13:38 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:51.872 18:13:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:51.872 18:13:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:51.872 18:13:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:51.872 18:13:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:51.872 18:13:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:51.872 18:13:38 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:51.872 256+0 records in 00:06:51.872 256+0 records out 00:06:51.872 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0104035 s, 101 MB/s 00:06:51.872 18:13:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:51.872 18:13:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:51.872 256+0 records in 00:06:51.872 256+0 records out 00:06:51.872 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0244617 s, 42.9 MB/s 00:06:51.872 18:13:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:51.872 18:13:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:51.872 256+0 records in 00:06:51.872 256+0 records out 00:06:51.872 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.027659 s, 37.9 MB/s 00:06:51.872 18:13:38 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:51.872 18:13:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:51.872 18:13:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:51.872 18:13:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:51.872 18:13:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:51.872 18:13:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:51.872 18:13:38 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:51.872 18:13:38 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:06:51.872 18:13:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:51.872 18:13:38 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:51.872 18:13:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:51.872 18:13:38 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:51.872 18:13:38 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:51.872 18:13:38 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:51.872 18:13:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:51.872 18:13:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:51.872 18:13:38 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:51.872 18:13:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:51.873 18:13:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:52.131 18:13:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:52.131 18:13:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:52.131 18:13:38 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:52.131 18:13:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:52.131 18:13:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:52.131 18:13:38 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:52.131 18:13:38 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:52.131 18:13:38 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:52.131 18:13:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:52.131 18:13:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:52.390 18:13:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:52.390 18:13:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:52.390 18:13:38 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:52.390 18:13:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:52.390 18:13:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:52.390 18:13:38 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:52.390 18:13:38 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:52.390 18:13:38 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:52.390 18:13:38 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:52.390 18:13:38 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:52.390 18:13:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:52.648 18:13:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:52.648 18:13:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:52.648 18:13:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # 
echo '[]' 00:06:52.648 18:13:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:52.648 18:13:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:52.648 18:13:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:52.648 18:13:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:52.648 18:13:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:52.648 18:13:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:52.648 18:13:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:52.648 18:13:39 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:52.648 18:13:39 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:52.648 18:13:39 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:52.907 18:13:39 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:53.166 [2024-07-11 18:13:39.396292] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:53.166 [2024-07-11 18:13:39.431274] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:53.166 [2024-07-11 18:13:39.431282] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.166 [2024-07-11 18:13:39.460726] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:53.166 [2024-07-11 18:13:39.460817] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:56.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:56.453 18:13:42 event.app_repeat -- event/event.sh@38 -- # waitforlisten 75104 /var/tmp/spdk-nbd.sock 00:06:56.453 18:13:42 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 75104 ']' 00:06:56.453 18:13:42 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:56.453 18:13:42 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:56.453 18:13:42 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
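
The `[]` / count=0 exchange just above is the post-teardown check: once both disks are stopped, nbd_get_disks must report nothing. The pipeline behind it, taken from the @63-@66 trace lines (grep -c exits 1 on zero matches, which is why the script's `true` follows it):

    SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}
    rpc="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    nbd_disks_json=$($rpc nbd_get_disks)                               # '[]' after teardown
    nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
    count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)         # 0, not an error
    [ "$count" -eq 0 ]                                                 # every device is gone
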
00:06:56.453 18:13:42 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:56.453 18:13:42 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:56.453 18:13:42 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:56.453 18:13:42 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:56.453 18:13:42 event.app_repeat -- event/event.sh@39 -- # killprocess 75104 00:06:56.453 18:13:42 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 75104 ']' 00:06:56.453 18:13:42 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 75104 00:06:56.453 18:13:42 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:06:56.453 18:13:42 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:56.453 18:13:42 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75104 00:06:56.453 killing process with pid 75104 00:06:56.453 18:13:42 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:56.453 18:13:42 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:56.453 18:13:42 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75104' 00:06:56.453 18:13:42 event.app_repeat -- common/autotest_common.sh@967 -- # kill 75104 00:06:56.453 18:13:42 event.app_repeat -- common/autotest_common.sh@972 -- # wait 75104 00:06:56.453 spdk_app_start is called in Round 0. 00:06:56.453 Shutdown signal received, stop current app iteration 00:06:56.453 Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 reinitialization... 00:06:56.453 spdk_app_start is called in Round 1. 00:06:56.453 Shutdown signal received, stop current app iteration 00:06:56.453 Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 reinitialization... 00:06:56.453 spdk_app_start is called in Round 2. 00:06:56.453 Shutdown signal received, stop current app iteration 00:06:56.453 Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 reinitialization... 00:06:56.453 spdk_app_start is called in Round 3. 
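
Between rounds and between tests the harness tears the target down with killprocess, replayed in full just above. A sketch of its core (the real helper in autotest_common.sh also handles the sudo-wrapped case, which this stand-in simply refuses; `wait` only reaps children of the calling shell):

    killprocess() {
        local pid=$1
        kill -0 "$pid"                            # fail fast if it already exited
        [ "$(uname)" = Linux ] || return 1        # the ps flags below are Linux-specific
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        [ "$process_name" != sudo ] || return 1   # never SIGTERM a sudo wrapper directly
        echo "killing process with pid $pid"
        kill "$pid"                               # SIGTERM; the app traps it and exits
        wait "$pid"                               # reap, so the next test starts clean
    }
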
00:06:56.453 Shutdown signal received, stop current app iteration 00:06:56.453 ************************************ 00:06:56.453 END TEST app_repeat 00:06:56.453 ************************************ 00:06:56.453 18:13:42 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:56.453 18:13:42 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:56.453 00:06:56.453 real 0m18.492s 00:06:56.453 user 0m41.852s 00:06:56.453 sys 0m2.503s 00:06:56.453 18:13:42 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:56.453 18:13:42 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:56.453 18:13:42 event -- common/autotest_common.sh@1142 -- # return 0 00:06:56.453 18:13:42 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:56.453 18:13:42 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:56.453 18:13:42 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:56.453 18:13:42 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:56.453 18:13:42 event -- common/autotest_common.sh@10 -- # set +x 00:06:56.453 ************************************ 00:06:56.453 START TEST cpu_locks 00:06:56.453 ************************************ 00:06:56.453 18:13:42 event.cpu_locks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:56.453 * Looking for test storage... 00:06:56.453 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:56.453 18:13:42 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:56.453 18:13:42 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:56.453 18:13:42 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:56.453 18:13:42 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:56.453 18:13:42 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:56.453 18:13:42 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:56.453 18:13:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:56.453 ************************************ 00:06:56.453 START TEST default_locks 00:06:56.453 ************************************ 00:06:56.712 18:13:42 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:06:56.712 18:13:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=75543 00:06:56.712 18:13:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:56.712 18:13:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 75543 00:06:56.712 18:13:42 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 75543 ']' 00:06:56.712 18:13:42 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:56.712 18:13:42 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:56.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:56.712 18:13:42 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
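
default_locks, starting here, is the baseline scenario: a single spdk_tgt pinned to core 0 must hold its core-lock file while alive, and waitforlisten on the dead pid must fail afterwards. Its skeleton, using the waitforlisten/killprocess stand-ins sketched earlier (the script's NOT helper is shown as a plain `!` negation; spdk_cpu_lock is the lock-file name the grep further down in the trace looks for):

    SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}
    "$SPDK_DIR/build/bin/spdk_tgt" -m 0x1 &            # pin to core 0
    spdk_tgt_pid=$!
    waitforlisten "$spdk_tgt_pid"                      # /var/tmp/spdk.sock by default
    lslocks -p "$spdk_tgt_pid" | grep -q spdk_cpu_lock # locks_exist: core-0 lock is held
    killprocess "$spdk_tgt_pid"
    ! waitforlisten "$spdk_tgt_pid"                    # must fail once the target is gone
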
00:06:56.712 18:13:42 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:56.712 18:13:42 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:56.712 [2024-07-11 18:13:42.962089] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:06:56.712 [2024-07-11 18:13:42.962303] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75543 ] 00:06:56.712 [2024-07-11 18:13:43.100838] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.970 [2024-07-11 18:13:43.133732] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.536 18:13:43 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:57.536 18:13:43 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:06:57.536 18:13:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 75543 00:06:57.536 18:13:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 75543 00:06:57.536 18:13:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:57.794 18:13:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 75543 00:06:57.794 18:13:44 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 75543 ']' 00:06:57.794 18:13:44 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 75543 00:06:57.794 18:13:44 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:06:57.794 18:13:44 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:57.794 18:13:44 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75543 00:06:57.794 killing process with pid 75543 00:06:57.794 18:13:44 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:57.794 18:13:44 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:57.794 18:13:44 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75543' 00:06:57.794 18:13:44 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 75543 00:06:57.794 18:13:44 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 75543 00:06:58.052 18:13:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 75543 00:06:58.052 18:13:44 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:06:58.052 18:13:44 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 75543 00:06:58.052 18:13:44 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:58.052 18:13:44 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:58.052 18:13:44 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:58.052 18:13:44 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:58.052 18:13:44 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 75543 00:06:58.052 18:13:44 event.cpu_locks.default_locks -- 
common/autotest_common.sh@829 -- # '[' -z 75543 ']' 00:06:58.052 18:13:44 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:58.052 18:13:44 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:58.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:58.053 18:13:44 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:58.053 18:13:44 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:58.053 18:13:44 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:58.053 ERROR: process (pid: 75543) is no longer running 00:06:58.053 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (75543) - No such process 00:06:58.053 18:13:44 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:58.053 18:13:44 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:06:58.053 18:13:44 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:06:58.053 18:13:44 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:58.053 18:13:44 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:58.053 18:13:44 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:58.053 18:13:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:58.053 18:13:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:58.053 18:13:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:58.053 18:13:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:58.053 00:06:58.053 real 0m1.576s 00:06:58.053 user 0m1.703s 00:06:58.053 sys 0m0.440s 00:06:58.053 18:13:44 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:58.053 18:13:44 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:58.053 ************************************ 00:06:58.053 END TEST default_locks 00:06:58.053 ************************************ 00:06:58.311 18:13:44 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:58.311 18:13:44 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:58.311 18:13:44 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:58.311 18:13:44 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:58.311 18:13:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:58.311 ************************************ 00:06:58.311 START TEST default_locks_via_rpc 00:06:58.311 ************************************ 00:06:58.311 18:13:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:06:58.311 18:13:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=75590 00:06:58.311 18:13:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 75590 00:06:58.311 18:13:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 75590 ']' 00:06:58.311 18:13:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:06:58.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:58.311 18:13:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:58.311 18:13:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:58.311 18:13:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:58.311 18:13:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:58.311 18:13:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:58.311 [2024-07-11 18:13:44.607484] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:06:58.311 [2024-07-11 18:13:44.607690] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75590 ] 00:06:58.569 [2024-07-11 18:13:44.754873] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.569 [2024-07-11 18:13:44.788038] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.135 18:13:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:59.135 18:13:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:59.135 18:13:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:59.135 18:13:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:59.135 18:13:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:59.135 18:13:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:59.135 18:13:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:59.135 18:13:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:59.135 18:13:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:59.135 18:13:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:59.135 18:13:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:59.135 18:13:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:59.135 18:13:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:59.135 18:13:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:59.135 18:13:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 75590 00:06:59.135 18:13:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 75590 00:06:59.135 18:13:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:59.702 18:13:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 75590 00:06:59.702 18:13:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 75590 ']' 00:06:59.702 
18:13:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 75590 00:06:59.702 18:13:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:06:59.702 18:13:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:59.702 18:13:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75590 00:06:59.702 18:13:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:59.702 18:13:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:59.702 killing process with pid 75590 00:06:59.702 18:13:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75590' 00:06:59.702 18:13:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 75590 00:06:59.702 18:13:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 75590 00:06:59.960 00:06:59.960 real 0m1.713s 00:06:59.960 user 0m1.850s 00:06:59.960 sys 0m0.502s 00:06:59.960 18:13:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:59.960 18:13:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:59.960 ************************************ 00:06:59.960 END TEST default_locks_via_rpc 00:06:59.960 ************************************ 00:06:59.960 18:13:46 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:59.960 18:13:46 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:59.960 18:13:46 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:59.960 18:13:46 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:59.960 18:13:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:59.960 ************************************ 00:06:59.960 START TEST non_locking_app_on_locked_coremask 00:06:59.960 ************************************ 00:06:59.960 18:13:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:06:59.960 18:13:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=75637 00:06:59.960 18:13:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:59.960 18:13:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 75637 /var/tmp/spdk.sock 00:06:59.960 18:13:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 75637 ']' 00:06:59.960 18:13:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:59.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:59.960 18:13:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:59.960 18:13:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
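
default_locks_via_rpc, which just finished above, exercises the same lock at runtime instead of at startup: framework_disable_cpumask_locks and framework_enable_cpumask_locks are the two RPCs visible in the trace, issued through rpc_cmd, the autotest wrapper around scripts/rpc.py. In outline (sketch; assumes a running spdk_tgt with its pid in $spdk_tgt_pid):

    rpc_cmd framework_disable_cpumask_locks              # drop the core lock files
    rpc_cmd framework_enable_cpumask_locks               # take them back
    lslocks -p "$spdk_tgt_pid" | grep -q spdk_cpu_lock   # held again, as lslocks confirms
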
00:06:59.960 18:13:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:59.960 18:13:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:59.960 [2024-07-11 18:13:46.340036] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:06:59.960 [2024-07-11 18:13:46.340232] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75637 ] 00:07:00.219 [2024-07-11 18:13:46.480947] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.219 [2024-07-11 18:13:46.513027] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.155 18:13:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:01.155 18:13:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:01.155 18:13:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=75653 00:07:01.155 18:13:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 75653 /var/tmp/spdk2.sock 00:07:01.155 18:13:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:01.155 18:13:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 75653 ']' 00:07:01.155 18:13:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:01.155 18:13:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:01.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:01.155 18:13:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:01.155 18:13:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:01.155 18:13:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:01.155 [2024-07-11 18:13:47.374523] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:07:01.155 [2024-07-11 18:13:47.374721] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75653 ] 00:07:01.155 [2024-07-11 18:13:47.533044] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
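
non_locking_app_on_locked_coremask, whose startup the trace above shows, demonstrates that a locked core can still be shared by an instance that opts out: the second spdk_tgt reuses mask 0x1 but passes --disable-cpumask-locks (hence the "CPU core locks deactivated" notice) and talks on its own RPC socket. Reduced to the two launches, with flags and socket paths verbatim from the trace:

    SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}
    "$SPDK_DIR/build/bin/spdk_tgt" -m 0x1 &                  # takes the core-0 lock
    spdk_tgt_pid=$!
    waitforlisten "$spdk_tgt_pid" /var/tmp/spdk.sock
    "$SPDK_DIR/build/bin/spdk_tgt" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    spdk_tgt_pid2=$!                                         # shares core 0, lock-free
    waitforlisten "$spdk_tgt_pid2" /var/tmp/spdk2.sock
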
00:07:01.155 [2024-07-11 18:13:47.533154] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.414 [2024-07-11 18:13:47.605434] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.980 18:13:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:01.980 18:13:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:01.980 18:13:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 75637 00:07:01.980 18:13:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 75637 00:07:01.980 18:13:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:02.546 18:13:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 75637 00:07:02.546 18:13:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 75637 ']' 00:07:02.546 18:13:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 75637 00:07:02.546 18:13:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:02.546 18:13:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:02.546 18:13:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75637 00:07:02.546 18:13:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:02.546 18:13:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:02.546 killing process with pid 75637 00:07:02.546 18:13:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75637' 00:07:02.546 18:13:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 75637 00:07:02.546 18:13:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 75637 00:07:03.113 18:13:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 75653 00:07:03.113 18:13:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 75653 ']' 00:07:03.113 18:13:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 75653 00:07:03.113 18:13:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:03.113 18:13:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:03.113 18:13:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75653 00:07:03.113 18:13:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:03.113 18:13:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:03.113 killing process with pid 75653 00:07:03.113 18:13:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75653' 00:07:03.113 18:13:49 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 75653 00:07:03.113 18:13:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 75653 00:07:03.372 00:07:03.372 real 0m3.445s 00:07:03.372 user 0m3.970s 00:07:03.372 sys 0m0.891s 00:07:03.372 18:13:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:03.372 18:13:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:03.372 ************************************ 00:07:03.372 END TEST non_locking_app_on_locked_coremask 00:07:03.372 ************************************ 00:07:03.372 18:13:49 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:03.372 18:13:49 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:03.372 18:13:49 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:03.372 18:13:49 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:03.372 18:13:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:03.372 ************************************ 00:07:03.372 START TEST locking_app_on_unlocked_coremask 00:07:03.372 ************************************ 00:07:03.372 18:13:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:07:03.372 18:13:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=75716 00:07:03.372 18:13:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 75716 /var/tmp/spdk.sock 00:07:03.372 18:13:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 75716 ']' 00:07:03.372 18:13:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:03.372 18:13:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:03.372 18:13:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:03.372 18:13:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:03.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:03.372 18:13:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:03.372 18:13:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:03.631 [2024-07-11 18:13:49.888996] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:07:03.631 [2024-07-11 18:13:49.889199] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75716 ] 00:07:03.631 [2024-07-11 18:13:50.036515] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:03.631 [2024-07-11 18:13:50.036610] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.890 [2024-07-11 18:13:50.070950] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.458 18:13:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:04.458 18:13:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:04.458 18:13:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=75732 00:07:04.458 18:13:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 75732 /var/tmp/spdk2.sock 00:07:04.458 18:13:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 75732 ']' 00:07:04.458 18:13:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:04.458 18:13:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:04.458 18:13:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:04.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:04.458 18:13:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:04.458 18:13:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:04.458 18:13:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:04.717 [2024-07-11 18:13:50.949351] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
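What the locking_app_on_unlocked_coremask setup above boils down to: the first target opts out of core locking, so a second target may bind the same core mask and take the locks itself. Roughly, with the paths as they appear in this log and flags other than the ones under test trimmed:

  # First instance skips creating /var/tmp/spdk_cpu_lock_* files
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &
  # Second instance, on its own RPC socket, claims core 0 unopposed
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &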
00:07:04.717 [2024-07-11 18:13:50.949557] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75732 ] 00:07:04.717 [2024-07-11 18:13:51.111504] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.986 [2024-07-11 18:13:51.185457] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.569 18:13:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:05.569 18:13:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:05.569 18:13:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 75732 00:07:05.569 18:13:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 75732 00:07:05.569 18:13:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:06.507 18:13:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 75716 00:07:06.507 18:13:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 75716 ']' 00:07:06.507 18:13:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 75716 00:07:06.507 18:13:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:06.507 18:13:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:06.507 18:13:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75716 00:07:06.507 18:13:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:06.507 18:13:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:06.507 killing process with pid 75716 00:07:06.507 18:13:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75716' 00:07:06.507 18:13:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 75716 00:07:06.507 18:13:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 75716 00:07:07.080 18:13:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 75732 00:07:07.080 18:13:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 75732 ']' 00:07:07.080 18:13:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 75732 00:07:07.080 18:13:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:07.080 18:13:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:07.080 18:13:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75732 00:07:07.080 killing process with pid 75732 00:07:07.080 18:13:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:07.080 18:13:53 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:07.080 18:13:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75732' 00:07:07.080 18:13:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 75732 00:07:07.080 18:13:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 75732 00:07:07.338 00:07:07.338 real 0m3.833s 00:07:07.338 user 0m4.478s 00:07:07.338 sys 0m1.034s 00:07:07.338 18:13:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:07.339 ************************************ 00:07:07.339 END TEST locking_app_on_unlocked_coremask 00:07:07.339 18:13:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:07.339 ************************************ 00:07:07.339 18:13:53 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:07.339 18:13:53 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:07.339 18:13:53 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:07.339 18:13:53 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:07.339 18:13:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:07.339 ************************************ 00:07:07.339 START TEST locking_app_on_locked_coremask 00:07:07.339 ************************************ 00:07:07.339 18:13:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:07:07.339 18:13:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=75796 00:07:07.339 18:13:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 75796 /var/tmp/spdk.sock 00:07:07.339 18:13:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 75796 ']' 00:07:07.339 18:13:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:07.339 18:13:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:07.339 18:13:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:07.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:07.339 18:13:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:07.339 18:13:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:07.339 18:13:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:07.597 [2024-07-11 18:13:53.753205] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:07:07.597 [2024-07-11 18:13:53.753445] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75796 ] 00:07:07.597 [2024-07-11 18:13:53.901298] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.597 [2024-07-11 18:13:53.935143] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.533 18:13:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:08.533 18:13:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:08.533 18:13:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:08.533 18:13:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=75812 00:07:08.533 18:13:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 75812 /var/tmp/spdk2.sock 00:07:08.533 18:13:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:07:08.533 18:13:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 75812 /var/tmp/spdk2.sock 00:07:08.533 18:13:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:08.533 18:13:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:08.533 18:13:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:08.533 18:13:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:08.533 18:13:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 75812 /var/tmp/spdk2.sock 00:07:08.533 18:13:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 75812 ']' 00:07:08.533 18:13:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:08.533 18:13:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:08.533 18:13:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:08.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:08.533 18:13:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:08.533 18:13:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:08.533 [2024-07-11 18:13:54.759130] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
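The NOT wrapper being set up at the end of the trace above inverts an exit status so that an expected failure counts as a pass. The real helper in autotest_common.sh also validates its argument via valid_exec_arg; a simplified sketch of the core pattern:

  # Succeed only when the wrapped command fails
  NOT() {
    local es=0
    "$@" || es=$?
    (( es != 0 ))   # expected failure -> return 0
  }
  NOT waitforlisten 75812 /var/tmp/spdk2.sock   # passes iff the second target never comes up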
00:07:08.533 [2024-07-11 18:13:54.759557] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75812 ] 00:07:08.533 [2024-07-11 18:13:54.904970] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 75796 has claimed it. 00:07:08.533 [2024-07-11 18:13:54.905071] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:09.101 ERROR: process (pid: 75812) is no longer running 00:07:09.101 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (75812) - No such process 00:07:09.101 18:13:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:09.101 18:13:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:07:09.101 18:13:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:07:09.101 18:13:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:09.101 18:13:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:09.101 18:13:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:09.101 18:13:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 75796 00:07:09.101 18:13:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 75796 00:07:09.101 18:13:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:09.360 18:13:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 75796 00:07:09.360 18:13:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 75796 ']' 00:07:09.360 18:13:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 75796 00:07:09.360 18:13:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:09.360 18:13:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:09.360 18:13:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75796 00:07:09.360 killing process with pid 75796 00:07:09.360 18:13:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:09.360 18:13:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:09.360 18:13:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75796' 00:07:09.360 18:13:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 75796 00:07:09.360 18:13:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 75796 00:07:09.619 00:07:09.619 real 0m2.350s 00:07:09.619 user 0m2.727s 00:07:09.619 sys 0m0.578s 00:07:09.619 ************************************ 00:07:09.619 END TEST locking_app_on_locked_coremask 00:07:09.619 ************************************ 00:07:09.619 18:13:55 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:09.619 18:13:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:09.878 18:13:56 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:09.878 18:13:56 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:09.878 18:13:56 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:09.879 18:13:56 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:09.879 18:13:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:09.879 ************************************ 00:07:09.879 START TEST locking_overlapped_coremask 00:07:09.879 ************************************ 00:07:09.879 18:13:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:07:09.879 18:13:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=75859 00:07:09.879 18:13:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:07:09.879 18:13:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 75859 /var/tmp/spdk.sock 00:07:09.879 18:13:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 75859 ']' 00:07:09.879 18:13:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:09.879 18:13:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:09.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:09.879 18:13:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:09.879 18:13:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:09.879 18:13:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:09.879 [2024-07-11 18:13:56.130611] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:07:09.879 [2024-07-11 18:13:56.130735] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75859 ] 00:07:09.879 [2024-07-11 18:13:56.270310] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:10.137 [2024-07-11 18:13:56.304297] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:10.137 [2024-07-11 18:13:56.304368] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.137 [2024-07-11 18:13:56.304451] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:10.706 18:13:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:10.706 18:13:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:10.706 18:13:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:10.706 18:13:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=75877 00:07:10.706 18:13:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 75877 /var/tmp/spdk2.sock 00:07:10.706 18:13:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:07:10.706 18:13:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 75877 /var/tmp/spdk2.sock 00:07:10.706 18:13:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:10.706 18:13:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:10.706 18:13:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:10.706 18:13:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:10.706 18:13:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 75877 /var/tmp/spdk2.sock 00:07:10.706 18:13:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 75877 ']' 00:07:10.706 18:13:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:10.706 18:13:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:10.706 18:13:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:10.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:10.706 18:13:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:10.706 18:13:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:10.706 [2024-07-11 18:13:57.093442] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
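Why the overlapped test is expected to fail: the first target took mask 0x7 (cores 0-2) and the second asks for 0x1c (cores 2-4), so both want core 2. The contested bit is easy to verify:

  printf '0x%x\n' $(( 0x07 & 0x1c ))   # -> 0x4, i.e. bit 2: core 2 is claimed twice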
00:07:10.706 [2024-07-11 18:13:57.093939] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75877 ] 00:07:10.965 [2024-07-11 18:13:57.243865] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 75859 has claimed it. 00:07:10.965 [2024-07-11 18:13:57.243969] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:11.533 ERROR: process (pid: 75877) is no longer running 00:07:11.533 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (75877) - No such process 00:07:11.533 18:13:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:11.533 18:13:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:07:11.533 18:13:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:07:11.533 18:13:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:11.533 18:13:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:11.533 18:13:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:11.533 18:13:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:11.534 18:13:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:11.534 18:13:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:11.534 18:13:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:11.534 18:13:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 75859 00:07:11.534 18:13:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 75859 ']' 00:07:11.534 18:13:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 75859 00:07:11.534 18:13:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:07:11.534 18:13:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:11.534 18:13:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75859 00:07:11.534 18:13:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:11.534 18:13:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:11.534 18:13:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75859' 00:07:11.534 killing process with pid 75859 00:07:11.534 18:13:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # kill 75859 00:07:11.534 18:13:57 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 75859 00:07:11.793 00:07:11.793 real 0m2.054s 00:07:11.793 user 0m5.750s 00:07:11.793 sys 0m0.402s 00:07:11.793 18:13:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:11.793 ************************************ 00:07:11.793 END TEST locking_overlapped_coremask 00:07:11.793 ************************************ 00:07:11.793 18:13:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:11.793 18:13:58 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:11.793 18:13:58 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:11.793 18:13:58 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:11.793 18:13:58 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:11.793 18:13:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:11.793 ************************************ 00:07:11.793 START TEST locking_overlapped_coremask_via_rpc 00:07:11.793 ************************************ 00:07:11.793 18:13:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:07:11.793 18:13:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=75919 00:07:11.794 18:13:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:11.794 18:13:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 75919 /var/tmp/spdk.sock 00:07:11.794 18:13:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 75919 ']' 00:07:11.794 18:13:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:11.794 18:13:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:11.794 18:13:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:11.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:11.794 18:13:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:11.794 18:13:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:12.053 [2024-07-11 18:13:58.271298] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:07:12.053 [2024-07-11 18:13:58.271491] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75919 ] 00:07:12.053 [2024-07-11 18:13:58.422981] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:12.053 [2024-07-11 18:13:58.423032] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:12.311 [2024-07-11 18:13:58.469674] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:12.311 [2024-07-11 18:13:58.469816] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.311 [2024-07-11 18:13:58.469845] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:12.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:12.879 18:13:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:12.879 18:13:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:12.879 18:13:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=75937 00:07:12.879 18:13:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 75937 /var/tmp/spdk2.sock 00:07:12.879 18:13:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:12.879 18:13:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 75937 ']' 00:07:12.879 18:13:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:12.879 18:13:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:12.879 18:13:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:12.879 18:13:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:12.879 18:13:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:12.879 [2024-07-11 18:13:59.280598] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:07:12.879 [2024-07-11 18:13:59.281304] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75937 ] 00:07:13.137 [2024-07-11 18:13:59.441572] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:13.137 [2024-07-11 18:13:59.441682] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:13.137 [2024-07-11 18:13:59.527405] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:13.137 [2024-07-11 18:13:59.527481] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:13.137 [2024-07-11 18:13:59.527613] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:07:14.074 18:14:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:14.074 18:14:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:14.074 18:14:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:14.074 18:14:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:14.074 18:14:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:14.074 18:14:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:14.074 18:14:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:14.074 18:14:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:07:14.074 18:14:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:14.074 18:14:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:07:14.074 18:14:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:14.074 18:14:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:07:14.074 18:14:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:14.074 18:14:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:14.074 18:14:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:14.075 18:14:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:14.075 [2024-07-11 18:14:00.171334] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 75919 has claimed it. 00:07:14.075 request: 00:07:14.075 { 00:07:14.075 "method": "framework_enable_cpumask_locks", 00:07:14.075 "req_id": 1 00:07:14.075 } 00:07:14.075 Got JSON-RPC error response 00:07:14.075 response: 00:07:14.075 { 00:07:14.075 "code": -32603, 00:07:14.075 "message": "Failed to claim CPU core: 2" 00:07:14.075 } 00:07:14.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
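The JSON-RPC round-trip above is the crux of the via_rpc variant: both targets start with --disable-cpumask-locks, then the locks are taken at runtime. Assuming rpc.py exposes the method under the same name as the RPC (rpc_cmd in the trace is the test suite's thin wrapper around it), the exchange looks like:

  # First target claims its cores; expected to succeed
  scripts/rpc.py framework_enable_cpumask_locks
  # Second target overlaps on core 2, so the same call returns
  # JSON-RPC error -32603, "Failed to claim CPU core: 2"
  scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks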
00:07:14.075 18:14:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:07:14.075 18:14:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:07:14.075 18:14:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:14.075 18:14:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:14.075 18:14:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:14.075 18:14:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 75919 /var/tmp/spdk.sock 00:07:14.075 18:14:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 75919 ']' 00:07:14.075 18:14:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:14.075 18:14:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:14.075 18:14:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:14.075 18:14:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:14.075 18:14:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:14.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:14.075 18:14:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:14.075 18:14:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:14.075 18:14:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 75937 /var/tmp/spdk2.sock 00:07:14.075 18:14:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 75937 ']' 00:07:14.075 18:14:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:14.075 18:14:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:14.075 18:14:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
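The check_remaining_locks step traced just below compares what is actually on disk against what the winning 0x7 target should hold. A condensed equivalent of that check:

  # Exactly /var/tmp/spdk_cpu_lock_000.._002 (cores 0-2) must remain
  locks=(/var/tmp/spdk_cpu_lock_*)
  locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
  [[ ${locks[*]} == "${locks_expected[*]}" ]] || exit 1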
00:07:14.075 18:14:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:14.075 18:14:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:14.333 ************************************ 00:07:14.333 END TEST locking_overlapped_coremask_via_rpc 00:07:14.333 ************************************ 00:07:14.333 18:14:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:14.334 18:14:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:14.334 18:14:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:14.334 18:14:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:14.334 18:14:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:14.334 18:14:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:14.334 00:07:14.334 real 0m2.513s 00:07:14.334 user 0m1.239s 00:07:14.334 sys 0m0.193s 00:07:14.334 18:14:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:14.334 18:14:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:14.334 18:14:00 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:14.334 18:14:00 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:14.334 18:14:00 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 75919 ]] 00:07:14.334 18:14:00 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 75919 00:07:14.334 18:14:00 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 75919 ']' 00:07:14.334 18:14:00 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 75919 00:07:14.334 18:14:00 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:07:14.334 18:14:00 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:14.334 18:14:00 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75919 00:07:14.334 killing process with pid 75919 00:07:14.334 18:14:00 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:14.334 18:14:00 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:14.334 18:14:00 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75919' 00:07:14.334 18:14:00 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 75919 00:07:14.334 18:14:00 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 75919 00:07:14.902 18:14:01 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 75937 ]] 00:07:14.902 18:14:01 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 75937 00:07:14.902 18:14:01 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 75937 ']' 00:07:14.902 18:14:01 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 75937 00:07:14.902 18:14:01 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:07:14.902 18:14:01 
event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:14.902 18:14:01 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75937 00:07:14.902 18:14:01 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:07:14.902 18:14:01 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:07:14.902 killing process with pid 75937 00:07:14.902 18:14:01 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75937' 00:07:14.902 18:14:01 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 75937 00:07:14.902 18:14:01 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 75937 00:07:15.162 18:14:01 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:15.162 Process with pid 75919 is not found 00:07:15.162 18:14:01 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:15.162 18:14:01 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 75919 ]] 00:07:15.162 18:14:01 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 75919 00:07:15.162 18:14:01 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 75919 ']' 00:07:15.162 18:14:01 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 75919 00:07:15.162 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (75919) - No such process 00:07:15.162 18:14:01 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 75919 is not found' 00:07:15.162 18:14:01 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 75937 ]] 00:07:15.162 18:14:01 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 75937 00:07:15.162 18:14:01 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 75937 ']' 00:07:15.162 Process with pid 75937 is not found 00:07:15.162 18:14:01 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 75937 00:07:15.162 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (75937) - No such process 00:07:15.162 18:14:01 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 75937 is not found' 00:07:15.162 18:14:01 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:15.162 00:07:15.162 real 0m18.598s 00:07:15.162 user 0m32.951s 00:07:15.162 sys 0m4.860s 00:07:15.162 ************************************ 00:07:15.162 END TEST cpu_locks 00:07:15.162 ************************************ 00:07:15.162 18:14:01 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:15.162 18:14:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:15.162 18:14:01 event -- common/autotest_common.sh@1142 -- # return 0 00:07:15.162 ************************************ 00:07:15.162 END TEST event 00:07:15.162 ************************************ 00:07:15.162 00:07:15.162 real 0m45.861s 00:07:15.162 user 1m29.808s 00:07:15.162 sys 0m8.255s 00:07:15.162 18:14:01 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:15.162 18:14:01 event -- common/autotest_common.sh@10 -- # set +x 00:07:15.162 18:14:01 -- common/autotest_common.sh@1142 -- # return 0 00:07:15.162 18:14:01 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:15.162 18:14:01 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:15.162 18:14:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:15.162 18:14:01 -- common/autotest_common.sh@10 -- # set +x 00:07:15.162 ************************************ 00:07:15.162 START TEST thread 
00:07:15.162 ************************************ 00:07:15.162 18:14:01 thread -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:15.162 * Looking for test storage... 00:07:15.162 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:07:15.162 18:14:01 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:15.162 18:14:01 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:15.162 18:14:01 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:15.162 18:14:01 thread -- common/autotest_common.sh@10 -- # set +x 00:07:15.162 ************************************ 00:07:15.162 START TEST thread_poller_perf 00:07:15.162 ************************************ 00:07:15.162 18:14:01 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:15.420 [2024-07-11 18:14:01.587398] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:07:15.420 [2024-07-11 18:14:01.587626] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76062 ] 00:07:15.420 [2024-07-11 18:14:01.739673] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.420 [2024-07-11 18:14:01.782375] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.420 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:07:16.803 ====================================== 00:07:16.803 busy:2211020800 (cyc) 00:07:16.803 total_run_count: 318000 00:07:16.803 tsc_hz: 2200000000 (cyc) 00:07:16.803 ====================================== 00:07:16.803 poller_cost: 6952 (cyc), 3160 (nsec) 00:07:16.803 00:07:16.803 real 0m1.314s 00:07:16.803 user 0m1.130s 00:07:16.803 sys 0m0.075s 00:07:16.803 18:14:02 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:16.803 18:14:02 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:16.803 ************************************ 00:07:16.803 END TEST thread_poller_perf 00:07:16.803 ************************************ 00:07:16.803 18:14:02 thread -- common/autotest_common.sh@1142 -- # return 0 00:07:16.803 18:14:02 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:16.803 18:14:02 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:16.803 18:14:02 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:16.803 18:14:02 thread -- common/autotest_common.sh@10 -- # set +x 00:07:16.803 ************************************ 00:07:16.803 START TEST thread_poller_perf 00:07:16.803 ************************************ 00:07:16.803 18:14:02 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:16.803 [2024-07-11 18:14:02.954553] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
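A note on the numbers just above: poller_cost is simply the busy cycle count divided by total_run_count, converted to nanoseconds using the reported tsc_hz. Reproducing the first run (1 microsecond period) in shell arithmetic:

  echo $(( 2211020800 / 318000 ))              # -> 6952 cycles per poll
  echo $(( 6952 * 1000000000 / 2200000000 ))   # -> 3160 nsec at the 2.2 GHz TSC above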
00:07:16.803 [2024-07-11 18:14:02.954749] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76093 ] 00:07:16.803 [2024-07-11 18:14:03.102524] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.803 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:07:16.803 [2024-07-11 18:14:03.140258] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.180 ====================================== 00:07:18.180 busy:2204197010 (cyc) 00:07:18.180 total_run_count: 4249000 00:07:18.180 tsc_hz: 2200000000 (cyc) 00:07:18.180 ====================================== 00:07:18.180 poller_cost: 518 (cyc), 235 (nsec) 00:07:18.180 ************************************ 00:07:18.180 END TEST thread_poller_perf 00:07:18.180 ************************************ 00:07:18.180 00:07:18.180 real 0m1.302s 00:07:18.180 user 0m1.118s 00:07:18.180 sys 0m0.077s 00:07:18.180 18:14:04 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:18.180 18:14:04 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:18.180 18:14:04 thread -- common/autotest_common.sh@1142 -- # return 0 00:07:18.180 18:14:04 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:18.180 ************************************ 00:07:18.180 END TEST thread 00:07:18.180 ************************************ 00:07:18.180 00:07:18.180 real 0m2.815s 00:07:18.180 user 0m2.319s 00:07:18.180 sys 0m0.270s 00:07:18.180 18:14:04 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:18.180 18:14:04 thread -- common/autotest_common.sh@10 -- # set +x 00:07:18.180 18:14:04 -- common/autotest_common.sh@1142 -- # return 0 00:07:18.180 18:14:04 -- spdk/autotest.sh@183 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:07:18.180 18:14:04 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:18.180 18:14:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:18.180 18:14:04 -- common/autotest_common.sh@10 -- # set +x 00:07:18.180 ************************************ 00:07:18.180 START TEST accel 00:07:18.180 ************************************ 00:07:18.180 18:14:04 accel -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:07:18.180 * Looking for test storage... 00:07:18.180 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:07:18.180 18:14:04 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:07:18.180 18:14:04 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:07:18.180 18:14:04 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:18.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:18.180 18:14:04 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=76174 00:07:18.180 18:14:04 accel -- accel/accel.sh@63 -- # waitforlisten 76174 00:07:18.180 18:14:04 accel -- common/autotest_common.sh@829 -- # '[' -z 76174 ']' 00:07:18.180 18:14:04 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:18.180 18:14:04 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:18.180 18:14:04 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
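For contrast, the zero-period run above (-l 0, busy pollers rather than 1 microsecond timed pollers) works out the same way and shows a roughly 13x lower per-invocation cost, presumably because the timer path is skipped:

  echo $(( 2204197010 / 4249000 ))            # -> 518 cycles per poll
  echo $(( 518 * 1000000000 / 2200000000 ))   # -> 235 nsec per poll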
00:07:18.180 18:14:04 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:18.180 18:14:04 accel -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:07:18.180 18:14:04 accel -- common/autotest_common.sh@10 -- # set +x 00:07:18.180 18:14:04 accel -- accel/accel.sh@61 -- # build_accel_config 00:07:18.180 18:14:04 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:18.180 18:14:04 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:18.180 18:14:04 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:18.180 18:14:04 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:18.180 18:14:04 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:18.180 18:14:04 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:18.180 18:14:04 accel -- accel/accel.sh@41 -- # jq -r . 00:07:18.180 [2024-07-11 18:14:04.529942] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:07:18.180 [2024-07-11 18:14:04.530146] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76174 ] 00:07:18.439 [2024-07-11 18:14:04.678410] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.439 [2024-07-11 18:14:04.713690] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.396 18:14:05 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:19.396 18:14:05 accel -- common/autotest_common.sh@862 -- # return 0 00:07:19.396 18:14:05 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:07:19.396 18:14:05 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:07:19.396 18:14:05 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:07:19.396 18:14:05 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:07:19.396 18:14:05 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:07:19.396 18:14:05 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:07:19.396 18:14:05 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:19.396 18:14:05 accel -- common/autotest_common.sh@10 -- # set +x 00:07:19.396 18:14:05 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:07:19.396 18:14:05 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:19.396 18:14:05 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:19.396 18:14:05 accel -- accel/accel.sh@72 -- # IFS== 00:07:19.396 18:14:05 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:19.396 18:14:05 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:19.396 18:14:05 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:19.396 18:14:05 accel -- accel/accel.sh@72 -- # IFS== 00:07:19.396 18:14:05 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:19.396 18:14:05 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:19.396 18:14:05 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:19.396 18:14:05 accel -- accel/accel.sh@72 -- # IFS== 00:07:19.396 18:14:05 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:19.396 18:14:05 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:19.396 18:14:05 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:19.396 18:14:05 accel -- accel/accel.sh@72 -- # IFS== 00:07:19.396 18:14:05 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:19.396 18:14:05 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:19.396 18:14:05 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:19.396 18:14:05 accel -- accel/accel.sh@72 -- # IFS== 00:07:19.396 18:14:05 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:19.396 18:14:05 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:19.396 18:14:05 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:19.396 18:14:05 accel -- accel/accel.sh@72 -- # IFS== 00:07:19.396 18:14:05 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:19.396 18:14:05 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:19.396 18:14:05 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:19.396 18:14:05 accel -- accel/accel.sh@72 -- # IFS== 00:07:19.396 18:14:05 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:19.396 18:14:05 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:19.396 18:14:05 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:19.396 18:14:05 accel -- accel/accel.sh@72 -- # IFS== 00:07:19.396 18:14:05 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:19.396 18:14:05 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:19.396 18:14:05 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:19.396 18:14:05 accel -- accel/accel.sh@72 -- # IFS== 00:07:19.396 18:14:05 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:19.396 18:14:05 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:19.396 18:14:05 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:19.396 18:14:05 accel -- accel/accel.sh@72 -- # IFS== 00:07:19.396 18:14:05 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:19.396 18:14:05 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:19.396 18:14:05 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:19.396 18:14:05 accel -- accel/accel.sh@72 -- # IFS== 00:07:19.396 18:14:05 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:19.396 18:14:05 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:19.396 18:14:05 accel -- accel/accel.sh@71 -- # for opc_opt in 
"${exp_opcs[@]}" 00:07:19.396 18:14:05 accel -- accel/accel.sh@72 -- # IFS== 00:07:19.396 18:14:05 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:19.396 18:14:05 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:19.396 18:14:05 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:19.396 18:14:05 accel -- accel/accel.sh@72 -- # IFS== 00:07:19.396 18:14:05 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:19.396 18:14:05 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:19.396 18:14:05 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:19.396 18:14:05 accel -- accel/accel.sh@72 -- # IFS== 00:07:19.396 18:14:05 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:19.396 18:14:05 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:19.396 18:14:05 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:19.396 18:14:05 accel -- accel/accel.sh@72 -- # IFS== 00:07:19.396 18:14:05 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:19.396 18:14:05 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:19.396 18:14:05 accel -- accel/accel.sh@75 -- # killprocess 76174 00:07:19.396 18:14:05 accel -- common/autotest_common.sh@948 -- # '[' -z 76174 ']' 00:07:19.396 18:14:05 accel -- common/autotest_common.sh@952 -- # kill -0 76174 00:07:19.396 18:14:05 accel -- common/autotest_common.sh@953 -- # uname 00:07:19.396 18:14:05 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:19.396 18:14:05 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76174 00:07:19.396 18:14:05 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:19.396 18:14:05 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:19.396 18:14:05 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76174' 00:07:19.396 killing process with pid 76174 00:07:19.396 18:14:05 accel -- common/autotest_common.sh@967 -- # kill 76174 00:07:19.396 18:14:05 accel -- common/autotest_common.sh@972 -- # wait 76174 00:07:19.665 18:14:05 accel -- accel/accel.sh@76 -- # trap - ERR 00:07:19.665 18:14:05 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:07:19.665 18:14:05 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:19.665 18:14:05 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:19.665 18:14:05 accel -- common/autotest_common.sh@10 -- # set +x 00:07:19.665 18:14:05 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:07:19.665 18:14:05 accel.accel_help -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:07:19.665 18:14:05 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:07:19.665 18:14:05 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:19.665 18:14:05 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:19.665 18:14:05 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:19.665 18:14:05 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:19.665 18:14:05 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:19.665 18:14:05 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:07:19.666 18:14:05 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:07:19.666 18:14:05 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:19.666 18:14:05 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:07:19.666 18:14:05 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:19.666 18:14:05 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:07:19.666 18:14:05 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:19.666 18:14:05 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:19.666 18:14:05 accel -- common/autotest_common.sh@10 -- # set +x 00:07:19.666 ************************************ 00:07:19.666 START TEST accel_missing_filename 00:07:19.666 ************************************ 00:07:19.666 18:14:05 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:07:19.666 18:14:05 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:07:19.666 18:14:05 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:07:19.666 18:14:05 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:19.666 18:14:05 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:19.666 18:14:05 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:19.666 18:14:05 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:19.666 18:14:05 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:07:19.666 18:14:05 accel.accel_missing_filename -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:07:19.666 18:14:05 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:07:19.666 18:14:05 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:19.666 18:14:05 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:19.666 18:14:05 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:19.666 18:14:05 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:19.666 18:14:05 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:19.666 18:14:05 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:07:19.666 18:14:05 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:07:19.666 [2024-07-11 18:14:06.006341] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:07:19.666 [2024-07-11 18:14:06.006516] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76222 ] 00:07:19.925 [2024-07-11 18:14:06.151063] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.925 [2024-07-11 18:14:06.187633] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.925 [2024-07-11 18:14:06.221621] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:19.925 [2024-07-11 18:14:06.271220] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:07:20.184 A filename is required. 
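The accel_missing_filename test above runs accel_perf under the harness's NOT wrapper: the test passes precisely because accel_perf exits non-zero ("A filename is required.", since compress was requested without the -l input file). A rough sketch of such an exit-status-inverting helper, assuming a simplified shape rather than the exact autotest_common.sh implementation:

    # Succeed only when the wrapped command fails.
    NOT() {
        if "$@"; then
            return 1  # unexpected success: the negative test should report failure
        fi
        return 0      # the command failed, which is what the test wants
    }
    # e.g. exits 0 here, since compress without -l <input file> must error out:
    NOT ./build/examples/accel_perf -t 1 -w compress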
00:07:20.184 18:14:06 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:07:20.184 18:14:06 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:20.184 18:14:06 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:07:20.184 18:14:06 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:07:20.184 18:14:06 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:07:20.184 18:14:06 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:20.184 00:07:20.184 real 0m0.389s 00:07:20.184 user 0m0.221s 00:07:20.184 sys 0m0.118s 00:07:20.184 18:14:06 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:20.184 ************************************ 00:07:20.184 END TEST accel_missing_filename 00:07:20.184 ************************************ 00:07:20.184 18:14:06 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:07:20.184 18:14:06 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:20.184 18:14:06 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:20.184 18:14:06 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:07:20.184 18:14:06 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:20.184 18:14:06 accel -- common/autotest_common.sh@10 -- # set +x 00:07:20.184 ************************************ 00:07:20.184 START TEST accel_compress_verify 00:07:20.184 ************************************ 00:07:20.184 18:14:06 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:20.184 18:14:06 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:07:20.184 18:14:06 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:20.184 18:14:06 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:20.184 18:14:06 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:20.184 18:14:06 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:20.184 18:14:06 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:20.184 18:14:06 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:20.184 18:14:06 accel.accel_compress_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:20.184 18:14:06 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:20.184 18:14:06 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:20.184 18:14:06 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:20.184 18:14:06 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:20.184 18:14:06 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:20.184 18:14:06 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:20.184 18:14:06 accel.accel_compress_verify -- 
accel/accel.sh@40 -- # local IFS=, 00:07:20.184 18:14:06 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:07:20.184 [2024-07-11 18:14:06.441554] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:07:20.184 [2024-07-11 18:14:06.441724] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76253 ] 00:07:20.184 [2024-07-11 18:14:06.588729] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.443 [2024-07-11 18:14:06.623713] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.443 [2024-07-11 18:14:06.654991] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:20.443 [2024-07-11 18:14:06.700834] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:07:20.443 00:07:20.443 Compression does not support the verify option, aborting. 00:07:20.443 18:14:06 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:07:20.443 18:14:06 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:20.443 18:14:06 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:07:20.443 18:14:06 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:07:20.443 18:14:06 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:07:20.443 18:14:06 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:20.443 00:07:20.443 real 0m0.377s 00:07:20.443 user 0m0.231s 00:07:20.443 sys 0m0.095s 00:07:20.443 18:14:06 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:20.443 ************************************ 00:07:20.443 END TEST accel_compress_verify 00:07:20.443 ************************************ 00:07:20.443 18:14:06 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:07:20.443 18:14:06 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:20.443 18:14:06 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:07:20.443 18:14:06 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:20.443 18:14:06 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:20.443 18:14:06 accel -- common/autotest_common.sh@10 -- # set +x 00:07:20.443 ************************************ 00:07:20.443 START TEST accel_wrong_workload 00:07:20.443 ************************************ 00:07:20.443 18:14:06 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:07:20.443 18:14:06 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:07:20.443 18:14:06 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:07:20.443 18:14:06 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:20.443 18:14:06 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:20.443 18:14:06 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:20.443 18:14:06 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:20.443 18:14:06 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 
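Both negative tests so far capture the child's exit status in es and then normalize it: the trace shows es=234 folded to es=106 and es=161 to es=33 (statuses above 128 conventionally mean death by signal, 128 + signum), after which any remaining non-zero value collapses to es=1. A hedged sketch of what those es steps suggest:

    # Normalize an exit status the way the es trace above behaves.
    normalize_es() {
        local es=$1
        if (( es > 128 )); then
            es=$(( es - 128 ))  # matches es=234 -> 106 and es=161 -> 33 in the trace
        fi
        (( es != 0 )) && es=1   # any failure ultimately reports as 1
        echo "$es"
    }
    normalize_es 234  # -> 1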
00:07:20.443 18:14:06 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:07:20.443 18:14:06 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:07:20.443 18:14:06 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:20.443 18:14:06 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:20.443 18:14:06 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:20.443 18:14:06 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:20.443 18:14:06 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:20.443 18:14:06 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:07:20.443 18:14:06 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:07:20.702 Unsupported workload type: foobar 00:07:20.702 [2024-07-11 18:14:06.859175] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:07:20.702 accel_perf options: 00:07:20.702 [-h help message] 00:07:20.702 [-q queue depth per core] 00:07:20.702 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:20.702 [-T number of threads per core 00:07:20.702 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:20.702 [-t time in seconds] 00:07:20.702 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:20.702 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:07:20.703 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:20.703 [-l for compress/decompress workloads, name of uncompressed input file 00:07:20.703 [-S for crc32c workload, use this seed value (default 0) 00:07:20.703 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:20.703 [-f for fill workload, use this BYTE value (default 255) 00:07:20.703 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:20.703 [-y verify result if this switch is on] 00:07:20.703 [-a tasks to allocate per core (default: same value as -q)] 00:07:20.703 Can be used to spread operations across a wider range of memory. 
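Against that usage text, the rejected call is simply a bad -w value; for contrast, well-formed accel_perf command lines taken from later tests in this same run look like:

    # 1-second software crc32c pass: seed 32 (-S), verify results (-y)
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w crc32c -S 32 -y
    # fill workload: byte value 128 (-f), queue depth 64 (-q), 64 tasks per core (-a)
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y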
00:07:20.703 18:14:06 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:07:20.703 18:14:06 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:20.703 18:14:06 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:20.703 18:14:06 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:20.703 00:07:20.703 real 0m0.066s 00:07:20.703 user 0m0.074s 00:07:20.703 sys 0m0.036s 00:07:20.703 ************************************ 00:07:20.703 END TEST accel_wrong_workload 00:07:20.703 ************************************ 00:07:20.703 18:14:06 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:20.703 18:14:06 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:07:20.703 18:14:06 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:20.703 18:14:06 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:07:20.703 18:14:06 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:07:20.703 18:14:06 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:20.703 18:14:06 accel -- common/autotest_common.sh@10 -- # set +x 00:07:20.703 ************************************ 00:07:20.703 START TEST accel_negative_buffers 00:07:20.703 ************************************ 00:07:20.703 18:14:06 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:07:20.703 18:14:06 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:07:20.703 18:14:06 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:07:20.703 18:14:06 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:20.703 18:14:06 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:20.703 18:14:06 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:20.703 18:14:06 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:20.703 18:14:06 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:07:20.703 18:14:06 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:07:20.703 18:14:06 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:07:20.703 18:14:06 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:20.703 18:14:06 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:20.703 18:14:06 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:20.703 18:14:06 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:20.703 18:14:06 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:20.703 18:14:06 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:07:20.703 18:14:06 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:07:20.703 -x option must be non-negative. 
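Here the harness feeds -x -1 and accel_perf rejects it before starting, since -x (the xor source buffer count) has a documented minimum of 2. A minimal getopts-style sketch of that kind of argument validation (a hypothetical shell rendering, not accel_perf's actual C option parser):

    # Reject a negative -x value during option parsing.
    while getopts "x:" opt; do
        case "$opt" in
            x)
                if (( OPTARG < 0 )); then
                    echo "-x option must be non-negative." >&2
                    exit 1
                fi
                xor_srcs=$OPTARG
                ;;
        esac
    done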
00:07:20.703 [2024-07-11 18:14:06.975853] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:07:20.703 accel_perf options: 00:07:20.703 [-h help message] 00:07:20.703 [-q queue depth per core] 00:07:20.703 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:20.703 [-T number of threads per core 00:07:20.703 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:20.703 [-t time in seconds] 00:07:20.703 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:20.703 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:07:20.703 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:20.703 [-l for compress/decompress workloads, name of uncompressed input file 00:07:20.703 [-S for crc32c workload, use this seed value (default 0) 00:07:20.703 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:20.703 [-f for fill workload, use this BYTE value (default 255) 00:07:20.703 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:20.703 [-y verify result if this switch is on] 00:07:20.703 [-a tasks to allocate per core (default: same value as -q)] 00:07:20.703 Can be used to spread operations across a wider range of memory. 00:07:20.703 ************************************ 00:07:20.703 END TEST accel_negative_buffers 00:07:20.703 ************************************ 00:07:20.703 18:14:07 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:07:20.703 18:14:07 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:20.703 18:14:07 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:20.703 18:14:07 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:20.703 00:07:20.703 real 0m0.065s 00:07:20.703 user 0m0.084s 00:07:20.703 sys 0m0.031s 00:07:20.703 18:14:07 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:20.703 18:14:07 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:07:20.703 18:14:07 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:20.703 18:14:07 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:07:20.703 18:14:07 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:20.703 18:14:07 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:20.703 18:14:07 accel -- common/autotest_common.sh@10 -- # set +x 00:07:20.703 ************************************ 00:07:20.703 START TEST accel_crc32c 00:07:20.703 ************************************ 00:07:20.703 18:14:07 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:07:20.703 18:14:07 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:20.703 18:14:07 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:20.703 18:14:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:20.703 18:14:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:20.703 18:14:07 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:07:20.703 18:14:07 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:20.703 18:14:07 accel.accel_crc32c -- 
accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:07:20.703 18:14:07 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:20.703 18:14:07 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:20.703 18:14:07 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:20.703 18:14:07 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:20.703 18:14:07 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:20.703 18:14:07 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:20.703 18:14:07 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:20.703 [2024-07-11 18:14:07.101731] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:07:20.703 [2024-07-11 18:14:07.101976] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76309 ] 00:07:20.962 [2024-07-11 18:14:07.251457] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.962 [2024-07-11 18:14:07.290857] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.962 18:14:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:20.962 18:14:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:20.962 18:14:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:20.962 18:14:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:20.962 18:14:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:20.962 18:14:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:20.962 18:14:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:20.962 18:14:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:20.962 18:14:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:20.962 18:14:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:20.962 18:14:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:20.962 18:14:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:20.962 18:14:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:20.962 18:14:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:20.962 18:14:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:20.962 18:14:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:20.962 18:14:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:20.962 18:14:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:20.962 18:14:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:20.962 18:14:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:20.962 18:14:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:07:20.962 18:14:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:20.962 18:14:07 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:20.962 18:14:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:20.962 18:14:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:20.962 18:14:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:20.962 18:14:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:20.962 18:14:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:20.962 18:14:07 accel.accel_crc32c -- 
accel/accel.sh@19 -- # read -r var val 00:07:20.962 18:14:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:20.962 18:14:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:20.962 18:14:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:20.962 18:14:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:20.962 18:14:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:20.962 18:14:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:20.962 18:14:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:20.962 18:14:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:20.962 18:14:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:07:20.962 18:14:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:20.962 18:14:07 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:20.962 18:14:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:20.962 18:14:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:20.962 18:14:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:20.962 18:14:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:20.962 18:14:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:20.962 18:14:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:20.962 18:14:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:20.962 18:14:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:20.962 18:14:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:20.962 18:14:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:20.962 18:14:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:07:20.962 18:14:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:20.962 18:14:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:20.962 18:14:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:20.962 18:14:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:20.962 18:14:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:20.962 18:14:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:20.962 18:14:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:20.962 18:14:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:20.962 18:14:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:20.962 18:14:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:20.962 18:14:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:20.962 18:14:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:20.962 18:14:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:20.962 18:14:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:20.962 18:14:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:20.962 18:14:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:20.962 18:14:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:20.962 18:14:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:20.962 18:14:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:22.340 18:14:08 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:22.340 18:14:08 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:22.340 18:14:08 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:22.340 18:14:08 accel.accel_crc32c -- 
accel/accel.sh@19 -- # read -r var val 00:07:22.340 18:14:08 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:22.340 18:14:08 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:22.340 18:14:08 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:22.340 18:14:08 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:22.340 18:14:08 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:22.340 18:14:08 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:22.340 18:14:08 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:22.340 18:14:08 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:22.340 18:14:08 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:22.340 18:14:08 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:22.340 18:14:08 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:22.340 18:14:08 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:22.340 18:14:08 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:22.340 18:14:08 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:22.340 18:14:08 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:22.340 18:14:08 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:22.340 18:14:08 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:22.340 18:14:08 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:22.340 18:14:08 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:22.340 18:14:08 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:22.340 18:14:08 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:22.341 18:14:08 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:22.341 18:14:08 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:22.341 00:07:22.341 real 0m1.408s 00:07:22.341 user 0m0.016s 00:07:22.341 sys 0m0.003s 00:07:22.341 18:14:08 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:22.341 ************************************ 00:07:22.341 END TEST accel_crc32c 00:07:22.341 ************************************ 00:07:22.341 18:14:08 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:22.341 18:14:08 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:22.341 18:14:08 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:07:22.341 18:14:08 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:22.341 18:14:08 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:22.341 18:14:08 accel -- common/autotest_common.sh@10 -- # set +x 00:07:22.341 ************************************ 00:07:22.341 START TEST accel_crc32c_C2 00:07:22.341 ************************************ 00:07:22.341 18:14:08 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:07:22.341 18:14:08 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:22.341 18:14:08 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:22.341 18:14:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.341 18:14:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.341 18:14:08 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:22.341 18:14:08 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 
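Every test in this stretch is launched through run_test, which is what prints the asterisk START/END banners and the real/user/sys timings (real 0m1.408s for accel_crc32c just above). A reduced sketch of such a wrapper, assuming a simpler shape than the full autotest_common.sh version:

    # Banner and time a named test, run_test style.
    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"   # the bash time keyword emits the real/user/sys lines
        local es=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $es
    }
    run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2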
00:07:22.341 18:14:08 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:22.341 18:14:08 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:22.341 18:14:08 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:22.341 18:14:08 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:22.341 18:14:08 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:22.341 18:14:08 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:22.341 18:14:08 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:22.341 18:14:08 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:22.341 [2024-07-11 18:14:08.550969] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:07:22.341 [2024-07-11 18:14:08.551265] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76350 ] 00:07:22.341 [2024-07-11 18:14:08.698367] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.341 [2024-07-11 18:14:08.733822] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.600 18:14:08 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:22.600 18:14:08 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.600 18:14:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.600 18:14:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.600 18:14:08 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:22.600 18:14:08 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.600 18:14:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.600 18:14:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.600 18:14:08 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:22.600 18:14:08 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.600 18:14:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.600 18:14:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.600 18:14:08 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:22.600 18:14:08 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.600 18:14:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.600 18:14:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.600 18:14:08 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:22.600 18:14:08 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.600 18:14:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.600 18:14:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.600 18:14:08 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:07:22.600 18:14:08 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.600 18:14:08 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:22.600 18:14:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.600 18:14:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.600 18:14:08 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:22.600 18:14:08 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.600 18:14:08 accel.accel_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:07:22.600 18:14:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.600 18:14:08 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:22.600 18:14:08 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.600 18:14:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.600 18:14:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.600 18:14:08 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:22.600 18:14:08 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.600 18:14:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.600 18:14:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.600 18:14:08 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:22.600 18:14:08 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.600 18:14:08 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:22.600 18:14:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.601 18:14:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.601 18:14:08 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:22.601 18:14:08 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.601 18:14:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.601 18:14:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.601 18:14:08 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:22.601 18:14:08 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.601 18:14:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.601 18:14:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.601 18:14:08 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:22.601 18:14:08 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.601 18:14:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.601 18:14:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.601 18:14:08 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:22.601 18:14:08 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.601 18:14:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.601 18:14:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.601 18:14:08 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:22.601 18:14:08 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.601 18:14:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.601 18:14:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.601 18:14:08 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:22.601 18:14:08 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.601 18:14:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.601 18:14:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:22.601 18:14:08 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:22.601 18:14:08 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:22.601 18:14:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:22.601 18:14:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:23.539 18:14:09 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:23.539 
18:14:09 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.539 18:14:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:23.539 18:14:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:23.539 18:14:09 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:23.539 18:14:09 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.539 18:14:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:23.539 18:14:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:23.539 18:14:09 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:23.539 18:14:09 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.539 18:14:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:23.539 18:14:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:23.539 18:14:09 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:23.539 18:14:09 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.539 18:14:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:23.539 18:14:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:23.539 18:14:09 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:23.539 18:14:09 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.539 18:14:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:23.539 18:14:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:23.539 18:14:09 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:23.539 18:14:09 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.539 18:14:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:23.539 18:14:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:23.539 18:14:09 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:23.539 18:14:09 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:23.539 18:14:09 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:23.539 00:07:23.539 real 0m1.383s 00:07:23.539 user 0m0.014s 00:07:23.539 sys 0m0.004s 00:07:23.539 18:14:09 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:23.539 18:14:09 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:23.539 ************************************ 00:07:23.539 END TEST accel_crc32c_C2 00:07:23.539 ************************************ 00:07:23.539 18:14:09 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:23.539 18:14:09 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:07:23.539 18:14:09 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:23.539 18:14:09 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:23.539 18:14:09 accel -- common/autotest_common.sh@10 -- # set +x 00:07:23.539 ************************************ 00:07:23.539 START TEST accel_copy 00:07:23.539 ************************************ 00:07:23.539 18:14:09 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:07:23.539 18:14:09 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:23.539 18:14:09 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:07:23.539 18:14:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:23.539 18:14:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:23.539 18:14:09 accel.accel_copy -- 
accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:07:23.539 18:14:09 accel.accel_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:23.539 18:14:09 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:23.539 18:14:09 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:23.539 18:14:09 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:23.539 18:14:09 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:23.539 18:14:09 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:23.539 18:14:09 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:23.539 18:14:09 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:23.539 18:14:09 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:07:23.798 [2024-07-11 18:14:09.982284] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:07:23.798 [2024-07-11 18:14:09.982466] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76380 ] 00:07:23.798 [2024-07-11 18:14:10.129607] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.798 [2024-07-11 18:14:10.168006] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.798 18:14:10 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:23.798 18:14:10 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:23.798 18:14:10 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:23.798 18:14:10 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:23.798 18:14:10 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:23.798 18:14:10 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:23.798 18:14:10 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:23.798 18:14:10 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:23.798 18:14:10 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:07:23.798 18:14:10 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:23.798 18:14:10 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:23.798 18:14:10 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:23.798 18:14:10 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:23.798 18:14:10 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:23.798 18:14:10 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:23.798 18:14:10 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:23.798 18:14:10 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:23.798 18:14:10 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:23.798 18:14:10 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:23.798 18:14:10 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:23.798 18:14:10 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:07:24.057 18:14:10 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:24.057 18:14:10 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:07:24.057 18:14:10 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:24.057 18:14:10 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:24.057 18:14:10 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:24.057 18:14:10 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:24.057 
18:14:10 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:24.057 18:14:10 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:24.057 18:14:10 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:24.057 18:14:10 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:24.057 18:14:10 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:24.057 18:14:10 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:24.057 18:14:10 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:07:24.057 18:14:10 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:24.057 18:14:10 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:24.057 18:14:10 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:24.057 18:14:10 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:24.057 18:14:10 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:24.057 18:14:10 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:24.057 18:14:10 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:24.057 18:14:10 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:24.057 18:14:10 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:24.057 18:14:10 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:24.057 18:14:10 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:24.057 18:14:10 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:24.057 18:14:10 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:07:24.057 18:14:10 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:24.057 18:14:10 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:24.057 18:14:10 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:24.057 18:14:10 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:24.057 18:14:10 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:24.057 18:14:10 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:24.057 18:14:10 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:24.057 18:14:10 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:07:24.057 18:14:10 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:24.057 18:14:10 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:24.057 18:14:10 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:24.057 18:14:10 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:24.057 18:14:10 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:24.057 18:14:10 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:24.057 18:14:10 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:24.057 18:14:10 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:24.057 18:14:10 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:24.057 18:14:10 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:24.057 18:14:10 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:24.994 18:14:11 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:24.994 18:14:11 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:24.994 18:14:11 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:24.994 18:14:11 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:24.994 18:14:11 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:24.994 18:14:11 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:24.994 18:14:11 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:24.994 18:14:11 accel.accel_copy -- accel/accel.sh@19 
-- # read -r var val 00:07:24.994 18:14:11 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:24.994 18:14:11 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:24.994 18:14:11 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:24.994 18:14:11 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:24.994 18:14:11 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:24.994 18:14:11 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:24.994 18:14:11 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:24.994 18:14:11 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:24.994 18:14:11 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:24.994 18:14:11 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:24.994 18:14:11 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:24.994 18:14:11 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:24.994 18:14:11 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:24.994 18:14:11 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:24.994 18:14:11 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:24.994 18:14:11 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:24.994 18:14:11 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:24.994 18:14:11 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:07:24.994 18:14:11 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:24.994 00:07:24.994 real 0m1.393s 00:07:24.994 user 0m0.014s 00:07:24.994 sys 0m0.005s 00:07:24.994 18:14:11 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:24.994 ************************************ 00:07:24.994 END TEST accel_copy 00:07:24.994 ************************************ 00:07:24.994 18:14:11 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:07:24.994 18:14:11 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:24.994 18:14:11 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:24.994 18:14:11 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:24.994 18:14:11 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:24.994 18:14:11 accel -- common/autotest_common.sh@10 -- # set +x 00:07:24.994 ************************************ 00:07:24.994 START TEST accel_fill 00:07:24.994 ************************************ 00:07:24.994 18:14:11 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:24.994 18:14:11 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:07:24.994 18:14:11 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:07:24.994 18:14:11 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:24.994 18:14:11 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:24.994 18:14:11 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:24.994 18:14:11 accel.accel_fill -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:24.994 18:14:11 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:07:24.994 18:14:11 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:24.994 18:14:11 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:24.994 18:14:11 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:24.994 18:14:11 accel.accel_fill -- 
accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:24.994 18:14:11 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:24.994 18:14:11 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:07:24.994 18:14:11 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:07:25.253 [2024-07-11 18:14:11.437906] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:07:25.253 [2024-07-11 18:14:11.438142] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76410 ] 00:07:25.253 [2024-07-11 18:14:11.585366] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.253 [2024-07-11 18:14:11.618727] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.253 18:14:11 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:25.253 18:14:11 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:25.253 18:14:11 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:25.253 18:14:11 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:25.253 18:14:11 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:25.253 18:14:11 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:25.253 18:14:11 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:25.253 18:14:11 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:25.253 18:14:11 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:07:25.253 18:14:11 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:25.253 18:14:11 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:25.253 18:14:11 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:25.253 18:14:11 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:25.253 18:14:11 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:25.253 18:14:11 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:25.253 18:14:11 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:25.253 18:14:11 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:25.253 18:14:11 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:25.253 18:14:11 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:25.253 18:14:11 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:25.253 18:14:11 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:07:25.253 18:14:11 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:25.253 18:14:11 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:07:25.253 18:14:11 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:25.253 18:14:11 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:25.253 18:14:11 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:07:25.253 18:14:11 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:25.253 18:14:11 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:25.253 18:14:11 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:25.253 18:14:11 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:25.253 18:14:11 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:25.253 18:14:11 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:25.253 18:14:11 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:25.253 18:14:11 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:25.253 18:14:11 accel.accel_fill -- 
accel/accel.sh@21 -- # case "$var" in 00:07:25.253 18:14:11 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:25.253 18:14:11 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:25.253 18:14:11 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:07:25.253 18:14:11 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:25.253 18:14:11 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:07:25.253 18:14:11 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:25.253 18:14:11 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:25.253 18:14:11 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:25.253 18:14:11 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:25.253 18:14:11 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:25.253 18:14:11 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:25.253 18:14:11 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:25.253 18:14:11 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:25.253 18:14:11 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:25.253 18:14:11 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:25.253 18:14:11 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:07:25.253 18:14:11 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:25.253 18:14:11 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:25.253 18:14:11 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:25.253 18:14:11 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:07:25.253 18:14:11 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:25.253 18:14:11 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:25.253 18:14:11 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:25.253 18:14:11 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:07:25.253 18:14:11 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:25.253 18:14:11 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:25.253 18:14:11 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:25.253 18:14:11 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:25.253 18:14:11 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:25.253 18:14:11 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:25.253 18:14:11 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:25.253 18:14:11 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:25.253 18:14:11 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:25.254 18:14:11 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:25.513 18:14:11 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:26.449 18:14:12 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:26.449 18:14:12 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:26.449 18:14:12 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:26.449 18:14:12 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:26.449 18:14:12 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:26.449 18:14:12 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:26.449 18:14:12 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:26.449 18:14:12 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:26.449 18:14:12 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:26.449 18:14:12 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:26.449 18:14:12 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 
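The val=/case/IFS=:/read lines threaded through these fill-test traces are accel.sh replaying the test's configuration one field at a time; note that val=0x80 is just the -f 128 fill byte in hex, and the paired val=64 entries line up with -q 64 and -a 64. A simplified sketch of the shape that trace suggests (not the verbatim accel.sh loop):

    # Consume "var: val" settings the way the trace's
    # IFS=: / read -r var val / case "$var" lines do.
    printf 'opc: fill\nmodule: software\n' |
        while IFS=: read -r var val; do
            case "$var" in
                *opc*)    echo "opcode set to${val}" ;;   # -> "opcode set to fill"
                *module*) echo "module set to${val}" ;;   # -> "module set to software"
            esac
        done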
00:07:26.449 18:14:12 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:26.449 18:14:12 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:26.449 18:14:12 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:26.449 18:14:12 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:26.449 18:14:12 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:26.449 18:14:12 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:26.449 18:14:12 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:26.449 18:14:12 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:26.449 18:14:12 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:26.449 18:14:12 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:26.449 18:14:12 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:26.449 18:14:12 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:26.449 18:14:12 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:26.449 18:14:12 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:26.449 18:14:12 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:07:26.449 18:14:12 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:26.449 00:07:26.449 real 0m1.379s 00:07:26.449 user 0m1.166s 00:07:26.449 sys 0m0.123s 00:07:26.449 ************************************ 00:07:26.449 END TEST accel_fill 00:07:26.449 ************************************ 00:07:26.449 18:14:12 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:26.449 18:14:12 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:07:26.449 18:14:12 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:26.449 18:14:12 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:26.449 18:14:12 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:26.449 18:14:12 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:26.449 18:14:12 accel -- common/autotest_common.sh@10 -- # set +x 00:07:26.449 ************************************ 00:07:26.449 START TEST accel_copy_crc32c 00:07:26.449 ************************************ 00:07:26.449 18:14:12 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:07:26.449 18:14:12 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:26.449 18:14:12 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:26.449 18:14:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:26.449 18:14:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:26.449 18:14:12 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:26.450 18:14:12 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:26.450 18:14:12 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:26.450 18:14:12 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:26.450 18:14:12 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:26.450 18:14:12 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:26.450 18:14:12 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:26.450 18:14:12 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:26.450 18:14:12 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # 
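The trace records the exact accel_perf invocation the harness uses for each workload. A minimal sketch for rerunning the fill test by hand, assuming the SPDK repo sits at the same path as on this CI host; the harness feeds a JSON accel config through -c /dev/fd/62, and dropping -c here is an assumption that accel_perf then defaults to the software module:

    # Rerun the fill workload from the log (flags copied from the trace).
    # Dropping -c /dev/fd/62 assumes the software module is the default.
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w fill -y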
00:07:26.449 18:14:12 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y
00:07:26.449 18:14:12 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']'
00:07:26.449 18:14:12 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:26.449 18:14:12 accel -- common/autotest_common.sh@10 -- # set +x
00:07:26.449 ************************************
00:07:26.449 START TEST accel_copy_crc32c
00:07:26.449 ************************************
00:07:26.449 18:14:12 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y
00:07:26.449 18:14:12 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc
00:07:26.449 18:14:12 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module
00:07:26.449 18:14:12 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y
00:07:26.449 18:14:12 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y
00:07:26.450 18:14:12 accel.accel_copy_crc32c -- [xtrace condensed: build_accel_config (accel/accel.sh@31-40): accel_json_cfg=(), [[ 0 -gt 0 ]] guards, [[ -n '' ]], local IFS=,]
00:07:26.450 18:14:12 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r .
00:07:26.715 [2024-07-11 18:14:12.866202] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization...
00:07:26.715 [2024-07-11 18:14:12.866398] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76451 ]
00:07:26.715 [2024-07-11 18:14:13.012481] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:26.715 [2024-07-11 18:14:13.044855] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:26.715 18:14:13 accel.accel_copy_crc32c -- [xtrace condensed: accel/accel.sh@19-23 option loop (IFS=: read -r var val; case "$var"); values parsed: 0x1, copy_crc32c (accel_opc=copy_crc32c), 0, '4096 bytes', '4096 bytes', software (accel_module=software), 32, 32, 1, '1 seconds', Yes]
00:07:28.090 18:14:14 accel.accel_copy_crc32c -- [xtrace condensed: accel/accel.sh@19-21 remaining empty val= reads after the run]
00:07:28.090 18:14:14 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:28.090 18:14:14 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]]
00:07:28.090 18:14:14 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:28.091 real 0m1.371s
00:07:28.091 user 0m1.169s
00:07:28.091 sys 0m0.112s
00:07:28.091 18:14:14 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:28.091 18:14:14 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x
00:07:28.091 ************************************
00:07:28.091 END TEST accel_copy_crc32c
00:07:28.091 ************************************
00:07:28.091 18:14:14 accel -- common/autotest_common.sh@1142 -- # return 0
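The IFS=:/read/case records condensed above are accel_test consuming accel_perf's "key: value" output to recover the opcode and module that actually ran, which the @27 checks then assert on. A minimal sketch of that idiom with hypothetical key names, not the literal accel.sh source:

    # Parse "key: value" lines and keep the fields the harness checks later.
    while IFS=: read -r var val; do
        case "$var" in
            *opcode*) accel_opc=${val//[[:space:]]/} ;;    # hypothetical key
            *module*) accel_module=${val//[[:space:]]/} ;; # hypothetical key
        esac
    done < <(/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w copy_crc32c -y)
    [[ -n $accel_opc && $accel_module == software ]]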
00:07:28.091 18:14:14 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2
00:07:28.091 18:14:14 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']'
00:07:28.091 18:14:14 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:28.091 18:14:14 accel -- common/autotest_common.sh@10 -- # set +x
00:07:28.091 ************************************
00:07:28.091 START TEST accel_copy_crc32c_C2
00:07:28.091 ************************************
00:07:28.091 18:14:14 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2
00:07:28.091 18:14:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc
00:07:28.091 18:14:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module
00:07:28.091 18:14:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2
00:07:28.091 18:14:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2
00:07:28.091 18:14:14 accel.accel_copy_crc32c_C2 -- [xtrace condensed: build_accel_config (accel/accel.sh@31-40): accel_json_cfg=(), [[ 0 -gt 0 ]] guards, [[ -n '' ]], local IFS=,]
00:07:28.091 18:14:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r .
00:07:28.091 [2024-07-11 18:14:14.286353] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization...
00:07:28.091 [2024-07-11 18:14:14.286543] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76481 ]
00:07:28.091 [2024-07-11 18:14:14.433165] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:28.091 [2024-07-11 18:14:14.468301] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:28.350 18:14:14 accel.accel_copy_crc32c_C2 -- [xtrace condensed: accel/accel.sh@19-23 option loop (IFS=: read -r var val; case "$var"); values parsed: 0x1, copy_crc32c (accel_opc=copy_crc32c), 0, '4096 bytes', '8192 bytes', software (accel_module=software), 32, 32, 1, '1 seconds', Yes]
00:07:29.287 18:14:15 accel.accel_copy_crc32c_C2 -- [xtrace condensed: accel/accel.sh@19-21 remaining empty val= reads after the run]
00:07:29.287 18:14:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:29.287 18:14:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]]
00:07:29.287 18:14:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:29.287 real 0m1.392s
00:07:29.287 user 0m1.189s
00:07:29.287 sys 0m0.114s
00:07:29.287 18:14:15 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:29.287 18:14:15 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x
00:07:29.287 ************************************
00:07:29.287 END TEST accel_copy_crc32c_C2
00:07:29.287 ************************************
00:07:29.287 18:14:15 accel -- common/autotest_common.sh@1142 -- # return 0
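Comparing the two copy_crc32c runs above, the only difference the trace shows for -C 2 is the second buffer growing from '4096 bytes' to '8192 bytes'. Invocation copied from the log, runnable standalone under the same assumptions as the earlier fill sketch:

    # copy_crc32c with -C 2; the traced buffer sizes become 4096 and 8192 bytes.
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w copy_crc32c -y -C 2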
00:07:29.287 18:14:15 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y
00:07:29.287 18:14:15 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']'
00:07:29.287 18:14:15 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:29.287 18:14:15 accel -- common/autotest_common.sh@10 -- # set +x
00:07:29.287 ************************************
00:07:29.287 START TEST accel_dualcast
00:07:29.287 ************************************
00:07:29.287 18:14:15 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y
00:07:29.287 18:14:15 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc
00:07:29.287 18:14:15 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module
00:07:29.287 18:14:15 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y
00:07:29.287 18:14:15 accel.accel_dualcast -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y
00:07:29.287 18:14:15 accel.accel_dualcast -- [xtrace condensed: build_accel_config (accel/accel.sh@31-40): accel_json_cfg=(), [[ 0 -gt 0 ]] guards, [[ -n '' ]], local IFS=,]
00:07:29.546 18:14:15 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r .
00:07:29.546 [2024-07-11 18:14:15.733576] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization...
00:07:29.546 [2024-07-11 18:14:15.733773] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76517 ]
00:07:29.546 [2024-07-11 18:14:15.881313] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:29.546 [2024-07-11 18:14:15.924175] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:29.806 18:14:15 accel.accel_dualcast -- [xtrace condensed: accel/accel.sh@19-23 option loop (IFS=: read -r var val; case "$var"); values parsed: 0x1, dualcast (accel_opc=dualcast), '4096 bytes', software (accel_module=software), 32, 32, 1, '1 seconds', Yes]
00:07:30.744 18:14:17 accel.accel_dualcast -- [xtrace condensed: accel/accel.sh@19-21 remaining empty val= reads after the run]
00:07:30.744 ************************************
00:07:30.744 END TEST accel_dualcast
00:07:30.744 ************************************
00:07:30.744 18:14:17 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:30.744 18:14:17 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]]
00:07:30.744 18:14:17 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:30.744 real 0m1.401s
00:07:30.744 user 0m0.013s
00:07:30.744 sys 0m0.004s
00:07:30.744 18:14:17 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:30.744 18:14:17 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x
00:07:30.744 18:14:17 accel -- common/autotest_common.sh@1142 -- # return 0
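Every test's setup trace shows build_accel_config initializing accel_json_cfg=() and ending in jq -r ., so the -c /dev/fd/62 argument is a jq-validated JSON document assembled from optional fragments. A rough sketch of that shape; the wrapper key is hypothetical and the real accel.sh may differ in detail:

    # Sketch: join optional JSON fragments with commas and validate via jq.
    build_accel_config_sketch() {
        local accel_json_cfg=()   # fragments would be appended here
        local IFS=,
        jq -r . <<< "{\"configs\":[${accel_json_cfg[*]}]}"   # hypothetical key
    }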
00:07:30.744 18:14:17 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y
00:07:30.744 18:14:17 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']'
00:07:30.744 18:14:17 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:30.744 18:14:17 accel -- common/autotest_common.sh@10 -- # set +x
00:07:30.744 ************************************
00:07:30.744 START TEST accel_compare
00:07:30.744 ************************************
00:07:30.744 18:14:17 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y
00:07:30.744 18:14:17 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc
00:07:30.744 18:14:17 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module
00:07:30.744 18:14:17 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y
00:07:30.744 18:14:17 accel.accel_compare -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y
00:07:30.744 18:14:17 accel.accel_compare -- [xtrace condensed: build_accel_config (accel/accel.sh@31-40): accel_json_cfg=(), [[ 0 -gt 0 ]] guards, [[ -n '' ]], local IFS=,]
00:07:31.003 18:14:17 accel.accel_compare -- accel/accel.sh@41 -- # jq -r .
00:07:31.003 [2024-07-11 18:14:17.195568] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization...
00:07:31.003 [2024-07-11 18:14:17.195774] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76552 ]
00:07:31.003 [2024-07-11 18:14:17.340594] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:31.003 [2024-07-11 18:14:17.374766] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:31.003 18:14:17 accel.accel_compare -- [xtrace condensed: accel/accel.sh@19-23 option loop (IFS=: read -r var val; case "$var"); values parsed: 0x1, compare (accel_opc=compare), '4096 bytes', software (accel_module=software), 32, 32, 1, '1 seconds', Yes]
00:07:32.196 18:14:18 accel.accel_compare -- [xtrace condensed: accel/accel.sh@19-21 remaining empty val= reads after the run]
00:07:32.196 18:14:18 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:32.196 18:14:18 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]]
00:07:32.196 18:14:18 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:32.196 real 0m1.396s
00:07:32.196 user 0m0.012s
00:07:32.196 sys 0m0.003s
00:07:32.196 18:14:18 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:32.196 ************************************
00:07:32.196 END TEST accel_compare
00:07:32.196 ************************************
00:07:32.196 18:14:18 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x
00:07:32.196 18:14:18 accel -- common/autotest_common.sh@1142 -- # return 0
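The real/user/sys triplets after each test come from timing the test body; a minimal equivalent, assuming (run_test lives in common/autotest_common.sh and is not shown here) that the harness wraps the body in bash's time keyword:

    # Reproduce a per-test timing triplet for one workload by hand.
    time /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compare -y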
00:07:32.196 18:14:18 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y
00:07:32.196 18:14:18 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']'
00:07:32.196 18:14:18 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:32.196 18:14:18 accel -- common/autotest_common.sh@10 -- # set +x
00:07:32.196 ************************************
00:07:32.196 START TEST accel_xor
00:07:32.196 ************************************
00:07:32.196 18:14:18 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y
00:07:32.196 18:14:18 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc
00:07:32.196 18:14:18 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module
00:07:32.196 18:14:18 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y
00:07:32.196 18:14:18 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config
00:07:32.196 18:14:18 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y
00:07:32.196 18:14:18 accel.accel_xor -- [xtrace condensed: accel/accel.sh@31-40: accel_json_cfg=(), [[ 0 -gt 0 ]] guards, [[ -n '' ]], local IFS=,]
00:07:32.454 18:14:18 accel.accel_xor -- accel/accel.sh@41 -- # jq -r .
00:07:32.454 [2024-07-11 18:14:18.639048] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization...
00:07:32.454 [2024-07-11 18:14:18.639290] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76583 ]
00:07:32.454 [2024-07-11 18:14:18.783548] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:32.454 [2024-07-11 18:14:18.825612] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:32.454 18:14:18 accel.accel_xor -- [xtrace condensed: accel/accel.sh@19-23 option loop (IFS=: read -r var val; case "$var"); values parsed: 0x1, xor (accel_opc=xor), 2, '4096 bytes', software (accel_module=software), 32, 32, 1, '1 seconds', Yes]
00:07:33.737 18:14:19 accel.accel_xor -- [xtrace condensed: accel/accel.sh@19-21 remaining empty val= reads after the run]
00:07:33.737 18:14:19 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:33.737 18:14:19 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]]
00:07:33.737 18:14:19 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:33.737 real 0m1.399s
00:07:33.737 user 0m0.018s
00:07:33.737 sys 0m0.000s
00:07:33.737 ************************************
00:07:33.737 END TEST accel_xor
00:07:33.737 ************************************
00:07:33.737 18:14:19 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:33.737 18:14:19 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x
00:07:33.737 18:14:20 accel -- common/autotest_common.sh@1142 -- # return 0
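The xor run above parsed a source count of 2; the next run_test passes -x 3 and the value 3 shows up in its parsed parameters below, so -x appears to set the number of xor source buffers. Command copied from the log:

    # xor across three source buffers instead of the default two.
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 3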
00:07:33.737 18:14:20 accel -- common/autotest_common.sh@1142 -- # return 0
00:07:33.737 18:14:20 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3
00:07:33.737 18:14:20 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']'
00:07:33.737 18:14:20 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:33.737 ************************************
00:07:33.737 START TEST accel_xor
00:07:33.737 ************************************
00:07:33.737 18:14:20 accel.accel_xor [xtrace condensed: accel_test locals (accel.sh@16-17) and build_accel_config expansion (accel.sh@31-41)]
00:07:33.737 18:14:20 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3
00:07:33.737 [2024-07-11 18:14:20.098996] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization...
00:07:33.737 [2024-07-11 18:14:20.099270] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76623 ]
00:07:33.996 [2024-07-11 18:14:20.246426] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:33.996 [2024-07-11 18:14:20.285407] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:33.996 18:14:20 accel.accel_xor [xtrace condensed: accel.sh@19-23 read the run configuration: val=0x1, accel_opc=xor, val=3, val='4096 bytes', accel_module=software, val=32, val=32, val=1, val='1 seconds', val=Yes]
00:07:35.412 18:14:21 accel.accel_xor [xtrace condensed: end-of-run empty vals drained]
00:07:35.412 18:14:21 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:35.412 18:14:21 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]]
00:07:35.412 18:14:21 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:35.412
00:07:35.412 real 0m1.410s
00:07:35.412 user 0m1.213s
00:07:35.412 sys 0m0.105s
00:07:35.412 18:14:21 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:35.412 18:14:21 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x
00:07:35.412 ************************************
00:07:35.412 END TEST accel_xor
00:07:35.412 ************************************
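This second accel_xor pass adds -x 3, echoed back in the config read as val=3 right after accel_opc=xor, so three source buffers feed each xor operation. A quick sweep sketch over source counts, under the same '{}' config stand-in as the earlier example; whether counts other than 3 are accepted by this accel_perf build is an assumption not confirmed by the log:

  # Sketch: time the software xor path at a few source-buffer counts
  # (supported range is assumed, not verified here).
  for n in 2 3 4; do
      /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
          -c <(echo '{}') -t 1 -w xor -y -x "$n"
  done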
00:07:35.412 18:14:21 accel -- common/autotest_common.sh@1142 -- # return 0
00:07:35.412 18:14:21 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify
00:07:35.412 18:14:21 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']'
00:07:35.412 18:14:21 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:35.412 ************************************
00:07:35.412 START TEST accel_dif_verify
00:07:35.412 ************************************
00:07:35.412 18:14:21 accel.accel_dif_verify [xtrace condensed: accel_test locals (accel.sh@16-17) and build_accel_config expansion (accel.sh@31-41)]
00:07:35.412 18:14:21 accel.accel_dif_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify
00:07:35.412 [2024-07-11 18:14:21.556235] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization...
00:07:35.412 [2024-07-11 18:14:21.556425] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76653 ]
00:07:35.412 [2024-07-11 18:14:21.699826] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:35.412 [2024-07-11 18:14:21.737963] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:35.412 18:14:21 accel.accel_dif_verify [xtrace condensed: accel.sh@19-23 read the run configuration: val=0x1, accel_opc=dif_verify, val='4096 bytes', val='4096 bytes', val='512 bytes', val='8 bytes', accel_module=software, val=32, val=32, val=1, val='1 seconds', val=No]
00:07:36.792 18:14:22 accel.accel_dif_verify [xtrace condensed: end-of-run empty vals drained]
00:07:36.792 18:14:22 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:36.792 18:14:22 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]]
00:07:36.792 18:14:22 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:36.792
00:07:36.792 real 0m1.394s
00:07:36.792 user 0m0.022s
00:07:36.792 sys 0m0.002s
00:07:36.792 18:14:22 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:36.792 18:14:22 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x
00:07:36.792 ************************************
00:07:36.792 END TEST accel_dif_verify
00:07:36.792 ************************************
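The dif_verify config read carries the DIF geometry: alongside the '4096 bytes' buffers it echoes '512 bytes' and '8 bytes', which suggests one 8-byte protection tuple per 512-byte block. Worked through in shell arithmetic (the interpretation of the echoed values is an assumption based on standard DIF layout):

  # DIF geometry implied by the values echoed above:
  blocks=$(( 4096 / 512 ))       # 8 blocks per 4096-byte buffer
  dif_bytes=$(( blocks * 8 ))    # 64 bytes of protection data per buffer
  echo "${blocks} blocks, ${dif_bytes} DIF bytes per buffer"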
00:07:36.792 18:14:22 accel -- common/autotest_common.sh@1142 -- # return 0
00:07:36.792 18:14:22 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate
00:07:36.792 18:14:22 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']'
00:07:36.792 18:14:22 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:36.792 ************************************
00:07:36.792 START TEST accel_dif_generate
00:07:36.792 ************************************
00:07:36.792 18:14:22 accel.accel_dif_generate [xtrace condensed: accel_test locals (accel.sh@16-17) and build_accel_config expansion (accel.sh@31-41)]
00:07:36.792 18:14:22 accel.accel_dif_generate -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate
00:07:36.792 [2024-07-11 18:14:23.005165] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization...
00:07:36.792 [2024-07-11 18:14:23.005334] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76689 ]
00:07:36.792 [2024-07-11 18:14:23.154129] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:36.792 [2024-07-11 18:14:23.192232] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:37.052 18:14:23 accel.accel_dif_generate [xtrace condensed: accel.sh@19-23 read the run configuration: val=0x1, accel_opc=dif_generate, val='4096 bytes', val='4096 bytes', val='512 bytes', val='8 bytes', accel_module=software, val=32, val=32, val=1, val='1 seconds', val=No]
00:07:37.988 18:14:24 accel.accel_dif_generate [xtrace condensed: end-of-run empty vals drained]
00:07:37.988 18:14:24 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:37.988 18:14:24 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]]
00:07:37.988 18:14:24 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:37.988
00:07:37.988 real 0m1.399s
00:07:37.988 user 0m1.190s
00:07:37.988 sys 0m0.116s
00:07:37.988 18:14:24 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:37.988 18:14:24 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x
00:07:37.988 ************************************
00:07:37.988 END TEST accel_dif_generate
00:07:37.988 ************************************
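Every block in this section follows the same harness shape: run_test prints the START banner, times the test body (that is where the real/user/sys triples come from), and closes with the END banner. A stripped-down sketch of that pattern; the real helper lives in common/autotest_common.sh and additionally manages xtrace state and return-code bookkeeping, which this sketch omits:

  # Sketch of the run_test banner-and-timing convention (simplified):
  run_test_sketch() {
      local name=$1; shift
      echo "************ START TEST ${name} ************"
      time "$@"        # emits the real/user/sys lines seen in this log
      echo "************ END TEST ${name} ************"
  }
  # usage: run_test_sketch accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy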
00:07:38.247 18:14:24 accel -- common/autotest_common.sh@1142 -- # return 0
00:07:38.247 18:14:24 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy
00:07:38.247 18:14:24 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']'
00:07:38.247 18:14:24 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:38.247 ************************************
00:07:38.247 START TEST accel_dif_generate_copy
00:07:38.247 ************************************
00:07:38.247 18:14:24 accel.accel_dif_generate_copy [xtrace condensed: accel_test locals (accel.sh@16-17) and build_accel_config expansion (accel.sh@31-41)]
00:07:38.247 18:14:24 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy
00:07:38.247 [2024-07-11 18:14:24.454850] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization...
00:07:38.247 [2024-07-11 18:14:24.455136] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76724 ]
00:07:38.247 [2024-07-11 18:14:24.609212] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:38.247 [2024-07-11 18:14:24.645108] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:38.506 18:14:24 accel.accel_dif_generate_copy [xtrace condensed: accel.sh@19-23 read the run configuration: val=0x1, accel_opc=dif_generate_copy, val='4096 bytes', val='4096 bytes', accel_module=software, val=32, val=32, val=1, val='1 seconds', val=No]
00:07:39.442 18:14:25 accel.accel_dif_generate_copy [xtrace condensed: end-of-run empty vals drained]
00:07:39.442 18:14:25 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:39.442 18:14:25 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]]
00:07:39.442 18:14:25 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:39.442
00:07:39.442 real 0m1.411s
00:07:39.442 user 0m1.194s
00:07:39.442 sys 0m0.125s
00:07:39.442 18:14:25 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:39.442 18:14:25 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x
00:07:39.442 ************************************
00:07:39.442 END TEST accel_dif_generate_copy
00:07:39.442 ************************************
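Each run so far reports roughly 1.4 s of wall time, consistent with one second of measurement (-t 1) plus app startup and teardown. When skimming a log of this shape, pulling just the banners and timing triples back out is often enough; a grep sketch, with autotest.log as a placeholder name for wherever this output was captured:

  # Sketch: list each test banner together with its wall time.
  grep -E '(START|END) TEST|real [0-9]+m' autotest.log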
00:07:39.700 18:14:25 accel -- common/autotest_common.sh@1142 -- # return 0
00:07:39.700 18:14:25 accel -- accel/accel.sh@115 -- # [[ y == y ]]
00:07:39.700 18:14:25 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib
00:07:39.700 18:14:25 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']'
00:07:39.700 18:14:25 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:39.700 ************************************
00:07:39.700 START TEST accel_comp
00:07:39.700 ************************************
00:07:39.700 18:14:25 accel.accel_comp [xtrace condensed: accel_test locals (accel.sh@16-17) and build_accel_config expansion (accel.sh@31-41)]
00:07:39.700 18:14:25 accel.accel_comp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib
00:07:39.700 [2024-07-11 18:14:25.904108] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization...
00:07:39.700 [2024-07-11 18:14:25.904278] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76760 ]
00:07:39.700 [2024-07-11 18:14:26.050138] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:39.700 [2024-07-11 18:14:26.087354] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:39.958 18:14:26 accel.accel_comp [xtrace condensed: accel.sh@19-23 read the run configuration: val=0x1, accel_opc=compress, val='4096 bytes', accel_module=software, val=/home/vagrant/spdk_repo/spdk/test/accel/bib, val=32, val=32, val=1, val='1 seconds', val=No]
00:07:40.890 18:14:27 accel.accel_comp [xtrace condensed: end-of-run empty vals drained]
00:07:40.890 18:14:27 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:40.890 18:14:27 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]]
00:07:40.890 18:14:27 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:40.890
00:07:40.890 real 0m1.384s
00:07:40.890 user 0m1.181s
00:07:40.890 sys 0m0.110s
00:07:40.890 18:14:27 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:40.890 18:14:27 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x
00:07:40.890 ************************************
00:07:40.890 END TEST accel_comp
00:07:40.890 ************************************
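accel_comp pushes /home/vagrant/spdk_repo/spdk/test/accel/bib through the software compress path; the accel_decomp test that follows runs the inverse workload against the same file, with -y added so the output is verified. A paired sketch under the same '{}' config stand-in as the earlier examples:

  # Sketch: compress, then decompress-with-verify, against the same input.
  BIB=/home/vagrant/spdk_repo/spdk/test/accel/bib
  PERF=/home/vagrant/spdk_repo/spdk/build/examples/accel_perf
  "$PERF" -c <(echo '{}') -t 1 -w compress -l "$BIB"
  "$PERF" -c <(echo '{}') -t 1 -w decompress -l "$BIB" -y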
00:07:40.890 18:14:27 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y
00:07:40.891 18:14:27 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']'
00:07:40.891 18:14:27 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:40.891 18:14:27 accel -- common/autotest_common.sh@10 -- # set +x
00:07:40.891 ************************************
00:07:40.891 START TEST accel_decomp
00:07:40.891 ************************************
00:07:40.891 18:14:27 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y
00:07:40.891 18:14:27 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc
00:07:40.891 18:14:27 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module
00:07:40.891 18:14:27 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y
00:07:40.891 18:14:27 accel.accel_decomp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y
00:07:41.148 18:14:27 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config
00:07:41.148 18:14:27 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=()
00:07:41.148 18:14:27 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:07:41.148 18:14:27 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:41.148 18:14:27 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:41.148 18:14:27 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]]
00:07:41.148 18:14:27 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=,
00:07:41.148 18:14:27 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r .
00:07:41.148 [2024-07-11 18:14:27.335419] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization...
00:07:41.148 [2024-07-11 18:14:27.335576] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76794 ]
00:07:41.148 [2024-07-11 18:14:27.473995] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:41.148 [2024-07-11 18:14:27.509072] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
[... per-value 'case "$var" in / IFS=: / read -r var val' xtrace plumbing elided; the configured values were ...]
00:07:41.149 18:14:27 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1
00:07:41.149 18:14:27 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress
00:07:41.149 18:14:27 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress
00:07:41.149 18:14:27 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:41.149 18:14:27 accel.accel_decomp -- accel/accel.sh@20 -- # val=software
00:07:41.149 18:14:27 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software
00:07:41.149 18:14:27 accel.accel_decomp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib
00:07:41.149 18:14:27 accel.accel_decomp -- accel/accel.sh@20 -- # val=32
00:07:41.149 18:14:27 accel.accel_decomp -- accel/accel.sh@20 -- # val=32
00:07:41.149 18:14:27 accel.accel_decomp -- accel/accel.sh@20 -- # val=1
00:07:41.149 18:14:27 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds'
00:07:41.149 18:14:27 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes
00:07:42.337 18:14:28 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:42.337 18:14:28 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:07:42.337 18:14:28 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:42.337 real 0m1.374s
00:07:42.337 user 0m1.176s
00:07:42.337 sys 0m0.109s
00:07:42.337 18:14:28 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:42.337 18:14:28 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x
00:07:42.337 ************************************
00:07:42.337 END TEST accel_decomp
00:07:42.337 ************************************
00:07:42.337 18:14:28 accel -- common/autotest_common.sh@1142 -- # return 0
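The elided plumbing above is the accel.sh loop that echoes each configured value: line @19 sets IFS=: and reads var/val pairs, @21 dispatches on $var, and @22/@23 capture the module and opcode. A reconstruction is sketched below; the trimming at @20, the case patterns, and the sample input are assumptions, since only the names var, val, accel_opc, and accel_module actually appear in this log.

    # Reconstructed sketch of the accel.sh@19-@23 parsing loop (approximate).
    while IFS=: read -r var val; do        # @19 in the trace
        val=${val##*' '}                   # @20: trace shows bare 'val=...' re-assignments
        case "$var" in                     # @21
            *module*) accel_module=$val ;; # @22: e.g. accel_module=software
            *opc*)    accel_opc=$val ;;    # @23: e.g. accel_opc=decompress
        esac
    done < <(printf '%s\n' 'workload opc: decompress' 'module: software')
    # The printf stands in for the real producer (accel_perf/accel_test
    # output); its line format here is invented for the sketch.
    echo "accel_opc=$accel_opc accel_module=$accel_module"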
00:07:42.337 18:14:28 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0
00:07:42.337 18:14:28 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']'
00:07:42.337 18:14:28 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:42.337 18:14:28 accel -- common/autotest_common.sh@10 -- # set +x
00:07:42.337 ************************************
00:07:42.337 START TEST accel_decomp_full
00:07:42.337 ************************************
00:07:42.337 18:14:28 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0
00:07:42.337 18:14:28 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0
00:07:42.337 18:14:28 accel.accel_decomp_full -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0
[... build_accel_config trace as in accel_decomp above ...]
00:07:42.594 [2024-07-11 18:14:28.765443] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization...
00:07:42.594 [2024-07-11 18:14:28.765628] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76825 ]
00:07:42.594 [2024-07-11 18:14:28.904418] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:42.594 [2024-07-11 18:14:28.939428] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
[... per-value 'case "$var" in / IFS=: / read -r var val' xtrace plumbing elided; the configured values were ...]
00:07:42.594 18:14:28 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1
00:07:42.594 18:14:28 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress
00:07:42.594 18:14:28 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress
00:07:42.594 18:14:28 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes'
00:07:42.594 18:14:28 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software
00:07:42.594 18:14:28 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software
00:07:42.594 18:14:28 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib
00:07:42.594 18:14:28 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32
00:07:42.594 18:14:28 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32
00:07:42.594 18:14:28 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1
00:07:42.594 18:14:28 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds'
00:07:42.594 18:14:28 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes
00:07:43.966 18:14:30 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:43.966 18:14:30 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:07:43.966 18:14:30 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:43.966 real 0m1.386s
00:07:43.966 user 0m1.202s
00:07:43.966 sys 0m0.096s
00:07:43.966 18:14:30 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:43.966 18:14:30 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x
00:07:43.966 ************************************
00:07:43.966 END TEST accel_decomp_full
00:07:43.966 ************************************
00:07:43.966 18:14:30 accel -- common/autotest_common.sh@1142 -- # return 0
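Relative to accel_decomp, the only new flag here was -o 0, and the traced buffer value changed from '4096 bytes' to '111250 bytes'. That suggests -o controls the block size, with 0 meaning the whole input file; treat that reading as an inference from this log rather than documented accel_perf behaviour.

    # Hand-run sketch of the full-buffer decompress case (flag meaning
    # inferred above; the '-c /dev/fd/62' config plumbing is dropped again).
    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/build/examples/accel_perf" -t 1 -w decompress -l "$SPDK/test/accel/bib" -y -o 0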
00:07:43.966 18:14:30 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf
00:07:43.966 18:14:30 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']'
00:07:43.966 18:14:30 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:43.966 18:14:30 accel -- common/autotest_common.sh@10 -- # set +x
00:07:43.966 ************************************
00:07:43.966 START TEST accel_decomp_mcore
00:07:43.966 ************************************
00:07:43.966 18:14:30 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf
00:07:43.966 18:14:30 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf
00:07:43.966 18:14:30 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf
[... build_accel_config trace as in accel_decomp above ...]
00:07:43.966 [2024-07-11 18:14:30.207275] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization...
00:07:43.966 [2024-07-11 18:14:30.207491] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76861 ]
00:07:44.225 [2024-07-11 18:14:30.355462] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4
00:07:44.225 [2024-07-11 18:14:30.394826] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:07:44.225 [2024-07-11 18:14:30.395019] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:07:44.225 [2024-07-11 18:14:30.395447] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:44.225 [2024-07-11 18:14:30.395531] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
[... per-value 'case "$var" in / IFS=: / read -r var val' xtrace plumbing elided; the configured values were ...]
00:07:44.225 18:14:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf
00:07:44.225 18:14:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress
00:07:44.225 18:14:30 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress
00:07:44.225 18:14:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:44.225 18:14:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software
00:07:44.225 18:14:30 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software
00:07:44.225 18:14:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib
00:07:44.225 18:14:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32
00:07:44.225 18:14:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32
00:07:44.225 18:14:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1
00:07:44.225 18:14:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds'
00:07:44.225 18:14:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes
00:07:45.431 18:14:31 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:45.431 18:14:31 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:07:45.431 18:14:31 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:45.431 real 0m1.424s
00:07:45.431 user 0m0.021s
00:07:45.431 sys 0m0.001s
00:07:45.431 18:14:31 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:45.431 18:14:31 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x
00:07:45.431 ************************************
00:07:45.431 END TEST accel_decomp_mcore
00:07:45.431 ************************************
00:07:45.431 18:14:31 accel -- common/autotest_common.sh@1142 -- # return 0
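The -m 0xf flag surfaces as EAL's '-c 0xf' and as four 'Reactor started on core N' notices, so it reads as a hex core mask with bits 0-3 set. A two-core variant is sketched below; that 0x3 scales down the same way is an assumption extrapolated from this run.

    # Core-mask sketch: 0x3 = binary 0011 = cores 0 and 1 (assumed to
    # behave like the 0xf run, which started four reactors).
    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/build/examples/accel_perf" -t 1 -w decompress -l "$SPDK/test/accel/bib" -y -m 0x3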
00:07:45.431 18:14:31 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf
00:07:45.431 18:14:31 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']'
00:07:45.431 18:14:31 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:45.431 18:14:31 accel -- common/autotest_common.sh@10 -- # set +x
00:07:45.431 ************************************
00:07:45.431 START TEST accel_decomp_full_mcore
00:07:45.431 ************************************
00:07:45.431 18:14:31 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf
00:07:45.431 18:14:31 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf
00:07:45.431 18:14:31 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf
[... build_accel_config trace as in accel_decomp above ...]
00:07:45.431 [2024-07-11 18:14:31.670339] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization...
00:07:45.431 [2024-07-11 18:14:31.670512] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76899 ]
00:07:45.689 [2024-07-11 18:14:31.811576] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4
00:07:45.689 [2024-07-11 18:14:31.850143] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:07:45.689 [2024-07-11 18:14:31.850332] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:07:45.689 [2024-07-11 18:14:31.850392] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:45.689 [2024-07-11 18:14:31.850463] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
[... per-value 'case "$var" in / IFS=: / read -r var val' xtrace plumbing elided; the configured values were ...]
00:07:45.689 18:14:31 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf
00:07:45.689 18:14:31 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress
00:07:45.689 18:14:31 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress
00:07:45.689 18:14:31 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes'
00:07:45.689 18:14:31 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software
00:07:45.689 18:14:31 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software
00:07:45.689 18:14:31 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib
00:07:45.689 18:14:31 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32
00:07:45.689 18:14:31 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32
00:07:45.689 18:14:31 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1
00:07:45.689 18:14:31 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds'
00:07:45.689 18:14:31 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes
00:07:46.879 18:14:33 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:46.879 18:14:33 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:07:46.879 18:14:33 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:46.879 real 0m1.404s
00:07:46.879 user 0m0.017s
00:07:46.879 sys 0m0.001s
00:07:46.879 18:14:33 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:46.879 18:14:33 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x
00:07:46.879 ************************************
00:07:46.879 END TEST accel_decomp_full_mcore
00:07:46.879 ************************************
00:07:46.879 18:14:33 accel -- common/autotest_common.sh@1142 -- # return 0
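Every block above follows the same run_test shape from common/autotest_common.sh: an argument-count guard at @1099, xtrace toggles at @1105/@1124, START/END banners, a bash 'time' wrapper producing the real/user/sys triplets, and 'return 0' at @1142. A reconstructed sketch is below; it is inferred from those trace lines, not copied from the actual script.

    # Approximate shape of run_test, reconstructed from the trace.
    run_test() {
        local name=$1
        shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                 # e.g. accel_test -t 1 -w decompress ... (@1123)
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc                # the trace shows 'return 0' on success (@1142)
    }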
00:07:46.879 18:14:33 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2
00:07:46.879 18:14:33 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']'
00:07:46.879 18:14:33 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:46.879 18:14:33 accel -- common/autotest_common.sh@10 -- # set +x
00:07:46.879 ************************************
00:07:46.879 START TEST accel_decomp_mthread
00:07:46.879 ************************************
00:07:46.879 18:14:33 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2
00:07:46.879 18:14:33 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2
00:07:46.879 18:14:33 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2
[... build_accel_config trace as in accel_decomp above ...]
00:07:46.879 [2024-07-11 18:14:33.117215] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization...
00:07:46.879 [2024-07-11 18:14:33.117374] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76938 ]
00:07:47.137 [2024-07-11 18:14:33.254300] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:47.137 [2024-07-11 18:14:33.288792] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
[... per-value 'case "$var" in / IFS=: / read -r var val' xtrace plumbing elided; the configured values were ...]
00:07:47.137 18:14:33 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1
00:07:47.137 18:14:33 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress
00:07:47.137 18:14:33 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress
00:07:47.137 18:14:33 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:47.137 18:14:33 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software
00:07:47.137 18:14:33 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software
00:07:47.137 18:14:33 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib
00:07:47.137 18:14:33 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32
00:07:47.137 18:14:33 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32
00:07:47.137 18:14:33 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2
00:07:47.137 18:14:33 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds'
00:07:47.137 18:14:33 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes
00:07:48.081 18:14:34 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:48.081 18:14:34 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:07:48.081 18:14:34 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:48.081 real 0m1.375s
00:07:48.081 user 0m1.184s
00:07:48.081 sys 0m0.103s
00:07:48.081 18:14:34 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:48.081 ************************************
00:07:48.081 END TEST accel_decomp_mthread
00:07:48.081 ************************************
00:07:48.081 18:14:34 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x
00:07:48.355 18:14:34 accel -- common/autotest_common.sh@1142 -- # return 0
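The only new flag in accel_decomp_mthread was -T 2, which appears in the trace as the lone 'val=2' where every single-threaded run logs 'val=1'; it plausibly requests two worker threads, but that semantics is an inference from this log. A hand-run sketch:

    # Sketch of the two-thread decompress case (-T semantics inferred above).
    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/build/examples/accel_perf" -t 1 -w decompress -l "$SPDK/test/accel/bib" -y -T 2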
TEST accel_decomp_full_mthread 00:07:48.355 ************************************ 00:07:48.355 18:14:34 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:48.355 18:14:34 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:48.355 18:14:34 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:48.355 18:14:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:48.355 18:14:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:48.355 18:14:34 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:48.355 18:14:34 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:48.355 18:14:34 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:48.355 18:14:34 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:48.355 18:14:34 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:48.355 18:14:34 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:48.355 18:14:34 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:48.355 18:14:34 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:48.355 18:14:34 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:48.355 18:14:34 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:48.355 [2024-07-11 18:14:34.554665] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:07:48.355 [2024-07-11 18:14:34.554870] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76968 ] 00:07:48.355 [2024-07-11 18:14:34.702869] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.355 [2024-07-11 18:14:34.739987] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.614 18:14:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:48.614 18:14:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:48.614 18:14:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:48.614 18:14:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:48.614 18:14:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:48.614 18:14:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:48.614 18:14:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:48.614 18:14:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:48.614 18:14:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:48.614 18:14:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:48.614 18:14:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:48.614 18:14:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:48.614 18:14:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:48.614 18:14:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:48.614 18:14:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:48.614 18:14:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:48.614 18:14:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:48.614 18:14:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:48.614 18:14:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:48.614 18:14:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:48.614 18:14:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:48.614 18:14:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:48.614 18:14:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:48.614 18:14:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:48.614 18:14:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:48.614 18:14:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:48.614 18:14:34 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:48.614 18:14:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:48.614 18:14:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:48.614 18:14:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:48.614 18:14:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:48.614 18:14:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:48.614 18:14:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 
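A note on the '-c /dev/fd/62' argument in the accel.sh@12 command above: the harness assembles its accel JSON configuration in a shell array (accel_json_cfg, rendered through jq -r .) and hands it to accel_perf over a process-substitution file descriptor rather than a file on disk. A minimal stand-alone sketch of the same pattern; the empty JSON object is an assumption standing in for whatever build_accel_config actually emits:

  # Sketch: hand accel_perf its config over an anonymous fd, mirroring the
  # '-c /dev/fd/62' usage in the trace; all other flags are copied from the log.
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
    -c <(echo '{}') \
    -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2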
00:07:48.614 18:14:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:48.614 18:14:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:48.614 18:14:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:48.614 18:14:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:48.614 18:14:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:07:48.614 18:14:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:48.614 18:14:34 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:48.614 18:14:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:48.614 18:14:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:48.614 18:14:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:48.614 18:14:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:48.614 18:14:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:48.614 18:14:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:48.614 18:14:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:48.614 18:14:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:48.614 18:14:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:48.614 18:14:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:48.614 18:14:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:48.614 18:14:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:48.614 18:14:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:48.614 18:14:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:48.614 18:14:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:07:48.614 18:14:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:48.614 18:14:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:48.614 18:14:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:48.614 18:14:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:48.614 18:14:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:48.614 18:14:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:48.614 18:14:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:48.614 18:14:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:48.614 18:14:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:48.614 18:14:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:48.614 18:14:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:48.614 18:14:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:48.614 18:14:34 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:48.614 18:14:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:48.614 18:14:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:48.614 18:14:34 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:48.614 18:14:34 
accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:48.614 18:14:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:48.614 18:14:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.548 18:14:35 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:49.548 18:14:35 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.548 18:14:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.548 18:14:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.548 18:14:35 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:49.548 18:14:35 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.548 18:14:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.548 18:14:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.548 18:14:35 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:49.548 18:14:35 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.548 18:14:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.548 18:14:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.548 18:14:35 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:49.548 18:14:35 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.548 18:14:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.548 18:14:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.548 18:14:35 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:49.548 18:14:35 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.548 18:14:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.548 18:14:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.548 18:14:35 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:49.548 18:14:35 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.548 18:14:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.548 18:14:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.548 18:14:35 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:49.548 18:14:35 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.548 18:14:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.548 18:14:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.548 18:14:35 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:49.548 18:14:35 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:49.548 ************************************ 00:07:49.548 END TEST accel_decomp_full_mthread 00:07:49.548 ************************************ 00:07:49.549 18:14:35 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:49.549 00:07:49.549 real 0m1.431s 00:07:49.549 user 0m1.223s 00:07:49.549 sys 0m0.118s 00:07:49.549 18:14:35 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:49.549 18:14:35 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 
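The pass gate for this suite is the trio of checks at accel.sh@27 visible just above: after the 'read -r var val' loop has parsed accel_perf's key/value output, the reported module and opcode must be non-empty and the module must match the one requested. A reduced sketch of that gate, with the values taken from this run:

  # Reduced form of the accel.sh@27 assertions; accel_module and accel_opc are
  # the variables the harness fills while parsing accel_perf output (accel.sh@22/@23).
  accel_module=software
  accel_opc=decompress
  [[ -n $accel_module ]]            # a module was reported
  [[ -n $accel_opc ]]               # an opcode was reported
  [[ $accel_module == software ]]   # and it is the requested software module
  echo 'accel_decomp_full_mthread: assertions hold'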
00:07:49.807 18:14:35 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:49.807 18:14:35 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:07:49.807 18:14:35 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:49.807 18:14:35 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:49.807 18:14:35 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:49.807 18:14:35 accel -- common/autotest_common.sh@10 -- # set +x 00:07:49.807 18:14:35 accel -- accel/accel.sh@137 -- # build_accel_config 00:07:49.807 18:14:35 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:49.807 18:14:35 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:49.807 18:14:35 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:49.807 18:14:35 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:49.807 18:14:35 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:49.807 18:14:35 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:49.807 18:14:35 accel -- accel/accel.sh@41 -- # jq -r . 00:07:49.807 ************************************ 00:07:49.807 START TEST accel_dif_functional_tests 00:07:49.807 ************************************ 00:07:49.807 18:14:35 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:49.807 [2024-07-11 18:14:36.070567] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:07:49.807 [2024-07-11 18:14:36.070759] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77010 ] 00:07:49.807 [2024-07-11 18:14:36.219790] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:50.066 [2024-07-11 18:14:36.256906] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:50.066 [2024-07-11 18:14:36.256982] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.066 [2024-07-11 18:14:36.257049] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:50.066 00:07:50.066 00:07:50.066 CUnit - A unit testing framework for C - Version 2.1-3 00:07:50.066 http://cunit.sourceforge.net/ 00:07:50.066 00:07:50.066 00:07:50.066 Suite: accel_dif 00:07:50.066 Test: verify: DIF generated, GUARD check ...passed 00:07:50.066 Test: verify: DIF generated, APPTAG check ...passed 00:07:50.066 Test: verify: DIF generated, REFTAG check ...passed 00:07:50.066 Test: verify: DIF not generated, GUARD check ...[2024-07-11 18:14:36.310601] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:50.066 passed 00:07:50.066 Test: verify: DIF not generated, APPTAG check ...[2024-07-11 18:14:36.310927] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:50.066 passed 00:07:50.066 Test: verify: DIF not generated, REFTAG check ...[2024-07-11 18:14:36.311276] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:50.066 passed 00:07:50.066 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:50.066 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-11 18:14:36.311631] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:50.066 passed 00:07:50.066 Test: verify: APPTAG incorrect, no 
APPTAG check ...passed 00:07:50.066 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:50.066 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:50.066 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-11 18:14:36.312247] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:50.066 passed 00:07:50.066 Test: verify copy: DIF generated, GUARD check ...passed 00:07:50.066 Test: verify copy: DIF generated, APPTAG check ...passed 00:07:50.066 Test: verify copy: DIF generated, REFTAG check ...passed 00:07:50.066 Test: verify copy: DIF not generated, GUARD check ...passed 00:07:50.066 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-11 18:14:36.313021] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:50.066 [2024-07-11 18:14:36.313200] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:50.066 passed 00:07:50.066 Test: verify copy: DIF not generated, REFTAG check ...passed 00:07:50.066 Test: generate copy: DIF generated, GUARD check ...[2024-07-11 18:14:36.313433] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:50.066 passed 00:07:50.066 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:50.066 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:50.066 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:50.066 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:50.066 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:50.066 Test: generate copy: iovecs-len validate ...passed 00:07:50.066 Test: generate copy: buffer alignment validate ...[2024-07-11 18:14:36.314606] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:07:50.066 passed 00:07:50.066 00:07:50.066 Run Summary: Type Total Ran Passed Failed Inactive 00:07:50.066 suites 1 1 n/a 0 0 00:07:50.066 tests 26 26 26 0 0 00:07:50.066 asserts 115 115 115 0 n/a 00:07:50.066 00:07:50.066 Elapsed time = 0.009 seconds 00:07:50.325 00:07:50.325 real 0m0.511s 00:07:50.325 user 0m0.541s 00:07:50.325 sys 0m0.166s 00:07:50.325 18:14:36 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:50.325 ************************************ 00:07:50.325 END TEST accel_dif_functional_tests 00:07:50.325 ************************************ 00:07:50.325 18:14:36 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:07:50.325 18:14:36 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:50.325 00:07:50.325 real 0m32.211s 00:07:50.325 user 0m33.653s 00:07:50.325 sys 0m3.885s 00:07:50.325 18:14:36 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:50.325 18:14:36 accel -- common/autotest_common.sh@10 -- # set +x 00:07:50.325 ************************************ 00:07:50.325 END TEST accel 00:07:50.325 ************************************ 00:07:50.325 18:14:36 -- common/autotest_common.sh@1142 -- # return 0 00:07:50.325 18:14:36 -- spdk/autotest.sh@184 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:50.325 18:14:36 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:50.325 18:14:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:50.325 18:14:36 -- common/autotest_common.sh@10 -- # set +x 00:07:50.325 ************************************ 00:07:50.325 START TEST accel_rpc 00:07:50.325 ************************************ 00:07:50.325 18:14:36 accel_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:50.325 * Looking for test storage... 00:07:50.325 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:07:50.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:50.325 18:14:36 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:50.325 18:14:36 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=77070 00:07:50.325 18:14:36 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 77070 00:07:50.325 18:14:36 accel_rpc -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:50.325 18:14:36 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 77070 ']' 00:07:50.325 18:14:36 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:50.325 18:14:36 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:50.325 18:14:36 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:50.325 18:14:36 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:50.325 18:14:36 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:50.584 [2024-07-11 18:14:36.780824] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
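The accel_rpc startup beginning here uses a pattern that recurs throughout this log: spdk_tgt is launched with --wait-for-rpc, which parks the target before subsystem initialization until framework_start_init arrives over the RPC socket, and waitforlisten blocks until /var/tmp/spdk.sock accepts connections. A hand-rolled sketch of that sequence; the polling loop is an assumption, and the harness's waitforlisten is considerably more careful:

  # Sketch: start the target paused, wait for its UNIX socket, release init later.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc &
  spdk_tgt_pid=$!
  until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done   # crude waitforlisten stand-in
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init
  kill "$spdk_tgt_pid"                                  # the suite's killprocess does this on exit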
00:07:50.584 [2024-07-11 18:14:36.781021] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77070 ] 00:07:50.584 [2024-07-11 18:14:36.927805] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.584 [2024-07-11 18:14:36.966254] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.517 18:14:37 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:51.517 18:14:37 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:51.517 18:14:37 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:51.517 18:14:37 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:51.517 18:14:37 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:51.517 18:14:37 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:51.517 18:14:37 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:51.517 18:14:37 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:51.517 18:14:37 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:51.517 18:14:37 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:51.517 ************************************ 00:07:51.517 START TEST accel_assign_opcode 00:07:51.517 ************************************ 00:07:51.517 18:14:37 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:07:51.517 18:14:37 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:51.517 18:14:37 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.517 18:14:37 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:51.517 [2024-07-11 18:14:37.715143] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:51.517 18:14:37 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.517 18:14:37 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:51.517 18:14:37 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.517 18:14:37 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:51.517 [2024-07-11 18:14:37.723138] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:51.517 18:14:37 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.517 18:14:37 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:51.517 18:14:37 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.517 18:14:37 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:51.517 18:14:37 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.517 18:14:37 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:51.517 18:14:37 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:51.517 18:14:37 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.517 18:14:37 accel_rpc.accel_assign_opcode 
-- common/autotest_common.sh@10 -- # set +x 00:07:51.517 18:14:37 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:07:51.517 18:14:37 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.517 software 00:07:51.517 00:07:51.517 real 0m0.204s 00:07:51.517 user 0m0.055s 00:07:51.517 sys 0m0.009s 00:07:51.517 18:14:37 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:51.517 ************************************ 00:07:51.517 END TEST accel_assign_opcode 00:07:51.517 ************************************ 00:07:51.517 18:14:37 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:51.775 18:14:37 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:07:51.775 18:14:37 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 77070 00:07:51.776 18:14:37 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 77070 ']' 00:07:51.776 18:14:37 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 77070 00:07:51.776 18:14:37 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:07:51.776 18:14:37 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:51.776 18:14:37 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77070 00:07:51.776 killing process with pid 77070 00:07:51.776 18:14:37 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:51.776 18:14:37 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:51.776 18:14:37 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77070' 00:07:51.776 18:14:37 accel_rpc -- common/autotest_common.sh@967 -- # kill 77070 00:07:51.776 18:14:37 accel_rpc -- common/autotest_common.sh@972 -- # wait 77070 00:07:52.034 ************************************ 00:07:52.034 END TEST accel_rpc 00:07:52.034 ************************************ 00:07:52.034 00:07:52.034 real 0m1.684s 00:07:52.034 user 0m1.844s 00:07:52.034 sys 0m0.375s 00:07:52.034 18:14:38 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:52.034 18:14:38 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:52.034 18:14:38 -- common/autotest_common.sh@1142 -- # return 0 00:07:52.034 18:14:38 -- spdk/autotest.sh@185 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:52.034 18:14:38 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:52.034 18:14:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:52.034 18:14:38 -- common/autotest_common.sh@10 -- # set +x 00:07:52.034 ************************************ 00:07:52.034 START TEST app_cmdline 00:07:52.034 ************************************ 00:07:52.034 18:14:38 app_cmdline -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:52.034 * Looking for test storage... 
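The accel_assign_opcode suite that just completed reduces to four RPCs, all visible in the trace: assign the copy opcode to a nonexistent module, reassign it to software (the later assignment overrides), start the framework, then confirm the assignment stuck. The same sequence against a live target started with --wait-for-rpc, using only calls that appear in this log:

  # The RPC sequence validated above, replayed by hand.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc accel_assign_opc -o copy -m incorrect    # accepted pre-init even though bogus
  $rpc accel_assign_opc -o copy -m software     # reassignment overrides it
  $rpc framework_start_init
  $rpc accel_get_opc_assignments | jq -r .copy  # expected output: software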
00:07:52.034 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:52.034 18:14:38 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:52.034 18:14:38 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=77164 00:07:52.034 18:14:38 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:52.034 18:14:38 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 77164 00:07:52.034 18:14:38 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 77164 ']' 00:07:52.034 18:14:38 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:52.034 18:14:38 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:52.034 18:14:38 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:52.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:52.034 18:14:38 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:52.034 18:14:38 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:52.293 [2024-07-11 18:14:38.494257] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:07:52.293 [2024-07-11 18:14:38.494638] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77164 ] 00:07:52.293 [2024-07-11 18:14:38.639367] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.293 [2024-07-11 18:14:38.675526] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.227 18:14:39 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:53.227 18:14:39 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:07:53.227 18:14:39 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:53.227 { 00:07:53.227 "version": "SPDK v24.09-pre git sha1 719d03c6a", 00:07:53.227 "fields": { 00:07:53.227 "major": 24, 00:07:53.227 "minor": 9, 00:07:53.227 "patch": 0, 00:07:53.227 "suffix": "-pre", 00:07:53.227 "commit": "719d03c6a" 00:07:53.227 } 00:07:53.227 } 00:07:53.485 18:14:39 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:53.485 18:14:39 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:53.485 18:14:39 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:53.485 18:14:39 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:53.485 18:14:39 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:53.485 18:14:39 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:53.485 18:14:39 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:53.485 18:14:39 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:53.485 18:14:39 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:53.485 18:14:39 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:53.485 18:14:39 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:53.485 18:14:39 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:53.485 18:14:39 app_cmdline -- 
app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:53.485 18:14:39 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:07:53.485 18:14:39 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:53.485 18:14:39 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:53.485 18:14:39 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:53.485 18:14:39 app_cmdline -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:53.485 18:14:39 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:53.485 18:14:39 app_cmdline -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:53.485 18:14:39 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:53.485 18:14:39 app_cmdline -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:53.485 18:14:39 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:53.485 18:14:39 app_cmdline -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:53.743 request: 00:07:53.743 { 00:07:53.743 "method": "env_dpdk_get_mem_stats", 00:07:53.743 "req_id": 1 00:07:53.743 } 00:07:53.743 Got JSON-RPC error response 00:07:53.743 response: 00:07:53.743 { 00:07:53.743 "code": -32601, 00:07:53.743 "message": "Method not found" 00:07:53.743 } 00:07:53.743 18:14:39 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:07:53.743 18:14:39 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:53.743 18:14:39 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:53.743 18:14:39 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:53.743 18:14:39 app_cmdline -- app/cmdline.sh@1 -- # killprocess 77164 00:07:53.743 18:14:39 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 77164 ']' 00:07:53.743 18:14:39 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 77164 00:07:53.743 18:14:39 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:07:53.743 18:14:39 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:53.743 18:14:39 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77164 00:07:53.743 killing process with pid 77164 00:07:53.743 18:14:39 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:53.743 18:14:39 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:53.743 18:14:39 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77164' 00:07:53.743 18:14:39 app_cmdline -- common/autotest_common.sh@967 -- # kill 77164 00:07:53.743 18:14:39 app_cmdline -- common/autotest_common.sh@972 -- # wait 77164 00:07:54.002 00:07:54.002 real 0m1.927s 00:07:54.002 user 0m2.434s 00:07:54.002 sys 0m0.394s 00:07:54.002 18:14:40 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:54.002 18:14:40 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:54.002 ************************************ 00:07:54.002 END TEST app_cmdline 00:07:54.002 ************************************ 00:07:54.002 18:14:40 -- common/autotest_common.sh@1142 -- # return 0 00:07:54.002 18:14:40 -- 
spdk/autotest.sh@186 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:54.002 18:14:40 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:54.002 18:14:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:54.002 18:14:40 -- common/autotest_common.sh@10 -- # set +x 00:07:54.002 ************************************ 00:07:54.002 START TEST version 00:07:54.002 ************************************ 00:07:54.002 18:14:40 version -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:54.002 * Looking for test storage... 00:07:54.002 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:54.002 18:14:40 version -- app/version.sh@17 -- # get_header_version major 00:07:54.002 18:14:40 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:54.002 18:14:40 version -- app/version.sh@14 -- # cut -f2 00:07:54.002 18:14:40 version -- app/version.sh@14 -- # tr -d '"' 00:07:54.002 18:14:40 version -- app/version.sh@17 -- # major=24 00:07:54.002 18:14:40 version -- app/version.sh@18 -- # get_header_version minor 00:07:54.002 18:14:40 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:54.002 18:14:40 version -- app/version.sh@14 -- # cut -f2 00:07:54.002 18:14:40 version -- app/version.sh@14 -- # tr -d '"' 00:07:54.002 18:14:40 version -- app/version.sh@18 -- # minor=9 00:07:54.002 18:14:40 version -- app/version.sh@19 -- # get_header_version patch 00:07:54.002 18:14:40 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:54.002 18:14:40 version -- app/version.sh@14 -- # cut -f2 00:07:54.002 18:14:40 version -- app/version.sh@14 -- # tr -d '"' 00:07:54.002 18:14:40 version -- app/version.sh@19 -- # patch=0 00:07:54.262 18:14:40 version -- app/version.sh@20 -- # get_header_version suffix 00:07:54.262 18:14:40 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:54.262 18:14:40 version -- app/version.sh@14 -- # cut -f2 00:07:54.262 18:14:40 version -- app/version.sh@14 -- # tr -d '"' 00:07:54.262 18:14:40 version -- app/version.sh@20 -- # suffix=-pre 00:07:54.262 18:14:40 version -- app/version.sh@22 -- # version=24.9 00:07:54.262 18:14:40 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:54.262 18:14:40 version -- app/version.sh@28 -- # version=24.9rc0 00:07:54.262 18:14:40 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:54.262 18:14:40 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:54.262 18:14:40 version -- app/version.sh@30 -- # py_version=24.9rc0 00:07:54.262 18:14:40 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:07:54.262 00:07:54.262 real 0m0.156s 00:07:54.262 user 0m0.092s 00:07:54.262 sys 0m0.098s 00:07:54.262 ************************************ 00:07:54.262 END TEST version 00:07:54.262 ************************************ 00:07:54.262 18:14:40 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:54.262 18:14:40 version -- common/autotest_common.sh@10 -- # set +x 
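version.sh derives each version component with the same three-stage pipeline traced above: grep the #define out of include/spdk/version.h, cut the value field, and strip the quotes. A compact restatement using the exact pipeline from the log; the result for this checkout is 24.9 with a -pre suffix, which version.sh@28 then maps to 24.9rc0:

  # get_header_version, reconstructed from the grep/cut/tr pipeline in the trace.
  hdr=/home/vagrant/spdk_repo/spdk/include/spdk/version.h
  major=$(grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
  minor=$(grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
  suffix=$(grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
  echo "$major.$minor$suffix"   # 24.9-pre for this tree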
00:07:54.262 18:14:40 -- common/autotest_common.sh@1142 -- # return 0 00:07:54.262 18:14:40 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:07:54.262 18:14:40 -- spdk/autotest.sh@198 -- # uname -s 00:07:54.262 18:14:40 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:07:54.262 18:14:40 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:54.262 18:14:40 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:54.262 18:14:40 -- spdk/autotest.sh@211 -- # '[' 1 -eq 1 ']' 00:07:54.262 18:14:40 -- spdk/autotest.sh@212 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:07:54.262 18:14:40 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:54.262 18:14:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:54.262 18:14:40 -- common/autotest_common.sh@10 -- # set +x 00:07:54.262 ************************************ 00:07:54.262 START TEST blockdev_nvme 00:07:54.262 ************************************ 00:07:54.262 18:14:40 blockdev_nvme -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:07:54.262 * Looking for test storage... 00:07:54.262 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:54.262 18:14:40 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:54.262 18:14:40 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:07:54.262 18:14:40 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:07:54.262 18:14:40 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:54.262 18:14:40 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:07:54.262 18:14:40 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:07:54.262 18:14:40 blockdev_nvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:07:54.262 18:14:40 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:07:54.262 18:14:40 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:07:54.262 18:14:40 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:07:54.262 18:14:40 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:07:54.262 18:14:40 blockdev_nvme -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:07:54.262 18:14:40 blockdev_nvme -- bdev/blockdev.sh@674 -- # uname -s 00:07:54.262 18:14:40 blockdev_nvme -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:07:54.262 18:14:40 blockdev_nvme -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:07:54.262 18:14:40 blockdev_nvme -- bdev/blockdev.sh@682 -- # test_type=nvme 00:07:54.262 18:14:40 blockdev_nvme -- bdev/blockdev.sh@683 -- # crypto_device= 00:07:54.262 18:14:40 blockdev_nvme -- bdev/blockdev.sh@684 -- # dek= 00:07:54.262 18:14:40 blockdev_nvme -- bdev/blockdev.sh@685 -- # env_ctx= 00:07:54.262 18:14:40 blockdev_nvme -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:07:54.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
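Every suite in this log runs under the run_test wrapper whose starred banners and real/user/sys timing lines punctuate the output. Its observable contract can be inferred from the log alone: print a START banner, time the body, print an END banner, and propagate the exit status. A hedged reconstruction; this is not the actual autotest_common.sh implementation, which also manages the xtrace state behind the set +x lines:

  # Inferred shape of run_test, from its banners and timing output only.
  run_test_sketch() {
    local name=$1; shift
    echo '************************************'
    echo "START TEST $name"
    echo '************************************'
    time "$@"
    local rc=$?
    echo '************************************'
    echo "END TEST $name"
    echo '************************************'
    return "$rc"
  }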
00:07:54.262 18:14:40 blockdev_nvme -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:07:54.262 18:14:40 blockdev_nvme -- bdev/blockdev.sh@690 -- # [[ nvme == bdev ]] 00:07:54.262 18:14:40 blockdev_nvme -- bdev/blockdev.sh@690 -- # [[ nvme == crypto_* ]] 00:07:54.262 18:14:40 blockdev_nvme -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:07:54.262 18:14:40 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=77309 00:07:54.262 18:14:40 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:54.262 18:14:40 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:54.262 18:14:40 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 77309 00:07:54.262 18:14:40 blockdev_nvme -- common/autotest_common.sh@829 -- # '[' -z 77309 ']' 00:07:54.262 18:14:40 blockdev_nvme -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:54.262 18:14:40 blockdev_nvme -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:54.262 18:14:40 blockdev_nvme -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:54.262 18:14:40 blockdev_nvme -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:54.262 18:14:40 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:54.521 [2024-07-11 18:14:40.713659] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:07:54.521 [2024-07-11 18:14:40.714044] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77309 ] 00:07:54.521 [2024-07-11 18:14:40.857715] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.521 [2024-07-11 18:14:40.898432] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.457 18:14:41 blockdev_nvme -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:55.457 18:14:41 blockdev_nvme -- common/autotest_common.sh@862 -- # return 0 00:07:55.457 18:14:41 blockdev_nvme -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:07:55.457 18:14:41 blockdev_nvme -- bdev/blockdev.sh@699 -- # setup_nvme_conf 00:07:55.457 18:14:41 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:07:55.457 18:14:41 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:07:55.457 18:14:41 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:55.458 18:14:41 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:07:55.458 18:14:41 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:55.458 18:14:41 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:55.716 18:14:41 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:55.716 18:14:41 blockdev_nvme -- bdev/blockdev.sh@737 -- 
# rpc_cmd bdev_wait_for_examine 00:07:55.716 18:14:41 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:55.716 18:14:41 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:55.716 18:14:42 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:55.716 18:14:42 blockdev_nvme -- bdev/blockdev.sh@740 -- # cat 00:07:55.716 18:14:42 blockdev_nvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:07:55.716 18:14:42 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:55.716 18:14:42 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:55.716 18:14:42 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:55.716 18:14:42 blockdev_nvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:07:55.716 18:14:42 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:55.716 18:14:42 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:55.716 18:14:42 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:55.716 18:14:42 blockdev_nvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:07:55.716 18:14:42 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:55.716 18:14:42 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:55.716 18:14:42 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:55.716 18:14:42 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:07:55.716 18:14:42 blockdev_nvme -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:07:55.716 18:14:42 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:55.716 18:14:42 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:55.716 18:14:42 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:07:55.976 18:14:42 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:55.976 18:14:42 blockdev_nvme -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:07:55.976 18:14:42 blockdev_nvme -- bdev/blockdev.sh@749 -- # jq -r .name 00:07:55.976 18:14:42 blockdev_nvme -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "699efa9b-ed9e-4794-b7fa-df7c08734e97"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "699efa9b-ed9e-4794-b7fa-df7c08734e97",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' 
"ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "b23b0f6a-55f1-4bb4-9e1f-2576f049c302"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "b23b0f6a-55f1-4bb4-9e1f-2576f049c302",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "be1fd749-bb89-41a3-a975-359018ae983a"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "be1fd749-bb89-41a3-a975-359018ae983a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "c1d1b2ec-d513-4b25-bb6b-89ca08cf6cfa"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "c1d1b2ec-d513-4b25-bb6b-89ca08cf6cfa",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' 
' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "3fe0298f-931a-4ee0-8ecd-7fe54c7a60a2"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "3fe0298f-931a-4ee0-8ecd-7fe54c7a60a2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "7d8566d4-cbd6-4069-8ad0-1ac8b3c2bfcc"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "7d8566d4-cbd6-4069-8ad0-1ac8b3c2bfcc",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": 
true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:07:55.976 18:14:42 blockdev_nvme -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:07:55.976 18:14:42 blockdev_nvme -- bdev/blockdev.sh@752 -- # hello_world_bdev=Nvme0n1 00:07:55.976 18:14:42 blockdev_nvme -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:07:55.976 18:14:42 blockdev_nvme -- bdev/blockdev.sh@754 -- # killprocess 77309 00:07:55.976 18:14:42 blockdev_nvme -- common/autotest_common.sh@948 -- # '[' -z 77309 ']' 00:07:55.976 18:14:42 blockdev_nvme -- common/autotest_common.sh@952 -- # kill -0 77309 00:07:55.976 18:14:42 blockdev_nvme -- common/autotest_common.sh@953 -- # uname 00:07:55.977 18:14:42 blockdev_nvme -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:55.977 18:14:42 blockdev_nvme -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77309 00:07:55.977 killing process with pid 77309 00:07:55.977 18:14:42 blockdev_nvme -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:55.977 18:14:42 blockdev_nvme -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:55.977 18:14:42 blockdev_nvme -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77309' 00:07:55.977 18:14:42 blockdev_nvme -- common/autotest_common.sh@967 -- # kill 77309 00:07:55.977 18:14:42 blockdev_nvme -- common/autotest_common.sh@972 -- # wait 77309 00:07:56.236 18:14:42 blockdev_nvme -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:56.236 18:14:42 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:56.236 18:14:42 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:56.236 18:14:42 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:56.236 18:14:42 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:56.236 ************************************ 00:07:56.236 START TEST bdev_hello_world 00:07:56.236 ************************************ 00:07:56.236 18:14:42 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:56.236 [2024-07-11 18:14:42.644962] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
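The bdev_hello_world test starting here drives the hello_bdev example end to end against the first NVMe namespace. Its invocation, with paths copied from the run_test line above; the NOTICE lines that follow show the expected sequence (open Nvme0n1, write, read back 'Hello World!', stop the app):

  # Invocation as used by this test; Nvme0n1 must exist in bdev.json's config.
  /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1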
00:07:56.236 [2024-07-11 18:14:42.645229] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77382 ] 00:07:56.495 [2024-07-11 18:14:42.792940] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.495 [2024-07-11 18:14:42.829243] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.065 [2024-07-11 18:14:43.190637] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:07:57.065 [2024-07-11 18:14:43.190683] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:07:57.065 [2024-07-11 18:14:43.190724] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:07:57.065 [2024-07-11 18:14:43.193042] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:07:57.065 [2024-07-11 18:14:43.193482] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:07:57.065 [2024-07-11 18:14:43.193524] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:07:57.065 [2024-07-11 18:14:43.193789] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:07:57.065 00:07:57.065 [2024-07-11 18:14:43.193862] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:07:57.065 00:07:57.065 real 0m0.824s 00:07:57.065 user 0m0.552s 00:07:57.065 sys 0m0.168s 00:07:57.065 18:14:43 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:57.065 ************************************ 00:07:57.065 END TEST bdev_hello_world 00:07:57.065 ************************************ 00:07:57.065 18:14:43 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:57.065 18:14:43 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:07:57.065 18:14:43 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:07:57.065 18:14:43 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:57.065 18:14:43 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:57.065 18:14:43 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:57.065 ************************************ 00:07:57.065 START TEST bdev_bounds 00:07:57.065 ************************************ 00:07:57.065 18:14:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1123 -- # bdev_bounds '' 00:07:57.065 Process bdevio pid: 77413 00:07:57.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
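(The "Waiting for process to start up and listen on UNIX domain socket" message above comes from the harness polling the bdevio app's RPC socket before driving it. A hedged sketch of that wait-then-drive flow, simplified from the waitforlisten helper; the retry count and sleep interval are illustrative:
# Poll until the bdevio app answers RPCs on its socket, then kick off the suites.
rpc_addr=/var/tmp/spdk.sock
for ((i = 0; i < 100; i++)); do
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 -s "$rpc_addr" rpc_get_methods &>/dev/null && break
    sleep 0.5
done
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
)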
00:07:57.065 18:14:43 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=77413 00:07:57.065 18:14:43 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:07:57.065 18:14:43 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 77413' 00:07:57.065 18:14:43 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:57.065 18:14:43 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 77413 00:07:57.065 18:14:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@829 -- # '[' -z 77413 ']' 00:07:57.065 18:14:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:57.065 18:14:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:57.065 18:14:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:57.065 18:14:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:57.065 18:14:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:57.323 [2024-07-11 18:14:43.525857] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:07:57.323 [2024-07-11 18:14:43.526380] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77413 ] 00:07:57.323 [2024-07-11 18:14:43.676884] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:57.323 [2024-07-11 18:14:43.713070] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:57.323 [2024-07-11 18:14:43.713143] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.323 [2024-07-11 18:14:43.713223] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:58.274 18:14:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:58.274 18:14:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@862 -- # return 0 00:07:58.274 18:14:44 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:07:58.274 I/O targets: 00:07:58.274 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:07:58.274 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:07:58.274 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:58.274 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:58.274 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:58.274 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:07:58.274 00:07:58.274 00:07:58.274 CUnit - A unit testing framework for C - Version 2.1-3 00:07:58.274 http://cunit.sourceforge.net/ 00:07:58.274 00:07:58.274 00:07:58.274 Suite: bdevio tests on: Nvme3n1 00:07:58.274 Test: blockdev write read block ...passed 00:07:58.274 Test: blockdev write zeroes read block ...passed 00:07:58.274 Test: blockdev write zeroes read no split ...passed 00:07:58.274 Test: blockdev write zeroes read split ...passed 00:07:58.274 Test: blockdev write zeroes read split partial ...passed 00:07:58.274 Test: blockdev reset ...[2024-07-11 18:14:44.547723] 
nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:07:58.274 passed 00:07:58.274 Test: blockdev write read 8 blocks ...[2024-07-11 18:14:44.550239] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:07:58.274 passed 00:07:58.274 Test: blockdev write read size > 128k ...passed 00:07:58.274 Test: blockdev write read invalid size ...passed 00:07:58.275 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:58.275 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:58.275 Test: blockdev write read max offset ...passed 00:07:58.275 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:58.275 Test: blockdev writev readv 8 blocks ...passed 00:07:58.275 Test: blockdev writev readv 30 x 1block ...passed 00:07:58.275 Test: blockdev writev readv block ...passed 00:07:58.275 Test: blockdev writev readv size > 128k ...passed 00:07:58.275 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:58.275 Test: blockdev comparev and writev ...[2024-07-11 18:14:44.557169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2afc0e000 len:0x1000 00:07:58.275 [2024-07-11 18:14:44.557230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:58.275 passed 00:07:58.275 Test: blockdev nvme passthru rw ...passed 00:07:58.275 Test: blockdev nvme passthru vendor specific ...[2024-07-11 18:14:44.558039] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:58.275 [2024-07-11 18:14:44.558100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:58.275 passed 00:07:58.275 Test: blockdev nvme admin passthru ...passed 00:07:58.275 Test: blockdev copy ...passed 00:07:58.275 Suite: bdevio tests on: Nvme2n3 00:07:58.275 Test: blockdev write read block ...passed 00:07:58.275 Test: blockdev write zeroes read block ...passed 00:07:58.275 Test: blockdev write zeroes read no split ...passed 00:07:58.275 Test: blockdev write zeroes read split ...passed 00:07:58.275 Test: blockdev write zeroes read split partial ...passed 00:07:58.275 Test: blockdev reset ...[2024-07-11 18:14:44.582043] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:07:58.275 passed 00:07:58.275 Test: blockdev write read 8 blocks ...[2024-07-11 18:14:44.584963] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:58.275 passed 00:07:58.275 Test: blockdev write read size > 128k ...passed 00:07:58.275 Test: blockdev write read invalid size ...passed 00:07:58.275 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:58.275 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:58.275 Test: blockdev write read max offset ...passed 00:07:58.275 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:58.275 Test: blockdev writev readv 8 blocks ...passed 00:07:58.275 Test: blockdev writev readv 30 x 1block ...passed 00:07:58.275 Test: blockdev writev readv block ...passed 00:07:58.275 Test: blockdev writev readv size > 128k ...passed 00:07:58.275 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:58.275 Test: blockdev comparev and writev ...[2024-07-11 18:14:44.591310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bae09000 len:0x1000 00:07:58.275 [2024-07-11 18:14:44.591366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:58.275 passed 00:07:58.275 Test: blockdev nvme passthru rw ...passed 00:07:58.275 Test: blockdev nvme passthru vendor specific ...[2024-07-11 18:14:44.592227] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:58.275 [2024-07-11 18:14:44.592285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:58.275 passed 00:07:58.275 Test: blockdev nvme admin passthru ...passed 00:07:58.275 Test: blockdev copy ...passed 00:07:58.275 Suite: bdevio tests on: Nvme2n2 00:07:58.275 Test: blockdev write read block ...passed 00:07:58.275 Test: blockdev write zeroes read block ...passed 00:07:58.275 Test: blockdev write zeroes read no split ...passed 00:07:58.275 Test: blockdev write zeroes read split ...passed 00:07:58.275 Test: blockdev write zeroes read split partial ...passed 00:07:58.275 Test: blockdev reset ...[2024-07-11 18:14:44.616294] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:07:58.275 passed 00:07:58.275 Test: blockdev write read 8 blocks ...[2024-07-11 18:14:44.618868] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
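(Note that the Nvme2n3 reset above, the Nvme2n2 reset here, and the Nvme2n1 reset further down all target the same PCI address, 0000:00:12.0: the three namespaces sit behind a single QEMU NVMe controller, so each suite's "blockdev reset" disconnects and reconnects that one controller. A hedged sketch of triggering the same reset path by hand, assuming the bdev_nvme_reset_controller RPC is available in this build and the controller was attached under the name Nvme2, matching the bdev naming above:
# Reset the shared controller backing Nvme2n1..Nvme2n3 (controller name assumed).
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_reset_controller Nvme2
)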
00:07:58.275 passed 00:07:58.275 Test: blockdev write read size > 128k ...passed 00:07:58.275 Test: blockdev write read invalid size ...passed 00:07:58.275 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:58.275 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:58.275 Test: blockdev write read max offset ...passed 00:07:58.275 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:58.275 Test: blockdev writev readv 8 blocks ...passed 00:07:58.275 Test: blockdev writev readv 30 x 1block ...passed 00:07:58.275 Test: blockdev writev readv block ...passed 00:07:58.275 Test: blockdev writev readv size > 128k ...passed 00:07:58.275 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:58.275 Test: blockdev comparev and writev ...[2024-07-11 18:14:44.625247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2afc06000 len:0x1000 00:07:58.275 [2024-07-11 18:14:44.625301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:58.275 passed 00:07:58.275 Test: blockdev nvme passthru rw ...passed 00:07:58.275 Test: blockdev nvme passthru vendor specific ...passed 00:07:58.275 Test: blockdev nvme admin passthru ...[2024-07-11 18:14:44.626125] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:58.275 [2024-07-11 18:14:44.626173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:58.275 passed 00:07:58.275 Test: blockdev copy ...passed 00:07:58.275 Suite: bdevio tests on: Nvme2n1 00:07:58.275 Test: blockdev write read block ...passed 00:07:58.275 Test: blockdev write zeroes read block ...passed 00:07:58.275 Test: blockdev write zeroes read no split ...passed 00:07:58.275 Test: blockdev write zeroes read split ...passed 00:07:58.275 Test: blockdev write zeroes read split partial ...passed 00:07:58.275 Test: blockdev reset ...[2024-07-11 18:14:44.638328] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:07:58.275 passed 00:07:58.275 Test: blockdev write read 8 blocks ...[2024-07-11 18:14:44.640616] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:58.275 passed 00:07:58.275 Test: blockdev write read size > 128k ...passed 00:07:58.275 Test: blockdev write read invalid size ...passed 00:07:58.275 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:58.275 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:58.275 Test: blockdev write read max offset ...passed 00:07:58.275 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:58.275 Test: blockdev writev readv 8 blocks ...passed 00:07:58.275 Test: blockdev writev readv 30 x 1block ...passed 00:07:58.275 Test: blockdev writev readv block ...passed 00:07:58.275 Test: blockdev writev readv size > 128k ...passed 00:07:58.275 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:58.275 Test: blockdev comparev and writev ...[2024-07-11 18:14:44.646421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2afc02000 len:0x1000 00:07:58.275 [2024-07-11 18:14:44.646487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:58.275 passed 00:07:58.275 Test: blockdev nvme passthru rw ...passed 00:07:58.275 Test: blockdev nvme passthru vendor specific ...passed 00:07:58.275 Test: blockdev nvme admin passthru ...[2024-07-11 18:14:44.647297] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:58.275 [2024-07-11 18:14:44.647359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:58.275 passed 00:07:58.275 Test: blockdev copy ...passed 00:07:58.275 Suite: bdevio tests on: Nvme1n1 00:07:58.275 Test: blockdev write read block ...passed 00:07:58.275 Test: blockdev write zeroes read block ...passed 00:07:58.275 Test: blockdev write zeroes read no split ...passed 00:07:58.275 Test: blockdev write zeroes read split ...passed 00:07:58.275 Test: blockdev write zeroes read split partial ...passed 00:07:58.275 Test: blockdev reset ...[2024-07-11 18:14:44.659806] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:07:58.275 passed 00:07:58.275 Test: blockdev write read 8 blocks ...[2024-07-11 18:14:44.661953] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:58.275 passed 00:07:58.275 Test: blockdev write read size > 128k ...passed 00:07:58.275 Test: blockdev write read invalid size ...passed 00:07:58.275 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:58.275 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:58.275 Test: blockdev write read max offset ...passed 00:07:58.275 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:58.275 Test: blockdev writev readv 8 blocks ...passed 00:07:58.275 Test: blockdev writev readv 30 x 1block ...passed 00:07:58.275 Test: blockdev writev readv block ...passed 00:07:58.275 Test: blockdev writev readv size > 128k ...passed 00:07:58.275 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:58.275 Test: blockdev comparev and writev ...[2024-07-11 18:14:44.667710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2af802000 len:0x1000 00:07:58.275 [2024-07-11 18:14:44.667808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:58.275 passed 00:07:58.275 Test: blockdev nvme passthru rw ...passed 00:07:58.275 Test: blockdev nvme passthru vendor specific ...passed 00:07:58.275 Test: blockdev nvme admin passthru ...[2024-07-11 18:14:44.668606] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:58.275 [2024-07-11 18:14:44.668653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:58.275 passed 00:07:58.275 Test: blockdev copy ...passed 00:07:58.275 Suite: bdevio tests on: Nvme0n1 00:07:58.275 Test: blockdev write read block ...passed 00:07:58.275 Test: blockdev write zeroes read block ...passed 00:07:58.275 Test: blockdev write zeroes read no split ...passed 00:07:58.275 Test: blockdev write zeroes read split ...passed 00:07:58.275 Test: blockdev write zeroes read split partial ...passed 00:07:58.275 Test: blockdev reset ...[2024-07-11 18:14:44.682634] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:07:58.275 passed 00:07:58.275 Test: blockdev write read 8 blocks ...[2024-07-11 18:14:44.684881] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:07:58.275 passed 00:07:58.275 Test: blockdev write read size > 128k ...passed 00:07:58.275 Test: blockdev write read invalid size ...passed 00:07:58.275 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:58.275 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:58.534 Test: blockdev write read max offset ...passed 00:07:58.534 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:58.534 Test: blockdev writev readv 8 blocks ...passed 00:07:58.534 Test: blockdev writev readv 30 x 1block ...passed 00:07:58.534 Test: blockdev writev readv block ...passed 00:07:58.534 Test: blockdev writev readv size > 128k ...passed 00:07:58.534 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:58.534 Test: blockdev comparev and writev ...passed 00:07:58.534 Test: blockdev nvme passthru rw ...[2024-07-11 18:14:44.690319] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:07:58.534 separate metadata which is not supported yet. 
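(Two things worth reading out of the completions above: the COMPARE FAILURE (02/85) notices appear alongside passing compare tests, so they are evidently the miscompare case those tests exercise rather than real failures, and Nvme0n1 skips comparev_and_writev because that bdev carries separate metadata. Per the NVMe spec the (02/85) pair decodes as Status Code Type 0x2, media and data integrity errors, Status Code 0x85, Compare Failure; a hypothetical helper for reading such pairs:
# Hypothetical decoder for the "(SCT/SC)" pairs in the qpair prints above.
decode_nvme_status() {
    local sct=$1 sc=$2
    case "$sct" in
        00) echo "generic command status (sc=0x$sc)" ;;
        01) echo "command specific status (sc=0x$sc)" ;;
        02) if [ "$sc" = "85" ]; then echo "media error: Compare Failure"
            else echo "media and data integrity error (sc=0x$sc)"; fi ;;
        *)  echo "sct=0x$sct sc=0x$sc" ;;
    esac
}
decode_nvme_status 02 85   # -> media error: Compare Failure
)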
00:07:58.534 passed 00:07:58.534 Test: blockdev nvme passthru vendor specific ...passed 00:07:58.534 Test: blockdev nvme admin passthru ...[2024-07-11 18:14:44.690895] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:07:58.534 [2024-07-11 18:14:44.690951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:07:58.534 passed 00:07:58.534 Test: blockdev copy ...passed 00:07:58.534 00:07:58.534 Run Summary: Type Total Ran Passed Failed Inactive 00:07:58.534 suites 6 6 n/a 0 0 00:07:58.534 tests 138 138 138 0 0 00:07:58.534 asserts 893 893 893 0 n/a 00:07:58.534 00:07:58.534 Elapsed time = 0.359 seconds 00:07:58.534 0 00:07:58.534 18:14:44 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 77413 00:07:58.534 18:14:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@948 -- # '[' -z 77413 ']' 00:07:58.534 18:14:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@952 -- # kill -0 77413 00:07:58.534 18:14:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@953 -- # uname 00:07:58.534 18:14:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:58.534 18:14:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77413 00:07:58.534 18:14:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:58.534 18:14:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:58.534 18:14:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77413' 00:07:58.534 killing process with pid 77413 00:07:58.534 18:14:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@967 -- # kill 77413 00:07:58.534 18:14:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # wait 77413 00:07:58.534 18:14:44 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:07:58.534 00:07:58.534 real 0m1.477s 00:07:58.534 user 0m3.784s 00:07:58.534 sys 0m0.293s 00:07:58.534 18:14:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:58.534 18:14:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:58.534 ************************************ 00:07:58.534 END TEST bdev_bounds 00:07:58.534 ************************************ 00:07:58.793 18:14:44 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:07:58.793 18:14:44 blockdev_nvme -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:58.793 18:14:44 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:07:58.793 18:14:44 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:58.793 18:14:44 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:58.793 ************************************ 00:07:58.793 START TEST bdev_nbd 00:07:58.793 ************************************ 00:07:58.793 18:14:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1123 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:58.793 18:14:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:07:58.793 18:14:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:07:58.793 
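(The killprocess calls traced above, for pid 77309 earlier and pid 77413 here, follow one pattern: confirm the pid is alive with kill -0, check the process name so a stray sudo wrapper is never killed, then kill and reap it. A condensed sketch of that helper, simplified from common/autotest_common.sh:
# Kill an SPDK reactor process by pid, guarding against killing the wrong thing.
killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1
    kill -0 "$pid" 2>/dev/null || return 0          # already gone
    local name
    name=$(ps --no-headers -o comm= "$pid")
    [ "$name" = "sudo" ] && return 1                # never kill a sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                                     # reap; pid is a child of the harness shell
}
)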
18:14:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:58.793 18:14:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:58.793 18:14:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:58.793 18:14:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_all 00:07:58.793 18:14:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@305 -- # local bdev_num=6 00:07:58.793 18:14:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:07:58.793 18:14:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:58.793 18:14:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # local nbd_all 00:07:58.793 18:14:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@312 -- # bdev_num=6 00:07:58.793 18:14:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:58.793 18:14:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local nbd_list 00:07:58.793 18:14:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@315 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:58.793 18:14:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@315 -- # local bdev_list 00:07:58.793 18:14:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # nbd_pid=77456 00:07:58.793 18:14:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:07:58.793 18:14:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:58.793 18:14:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@320 -- # waitforlisten 77456 /var/tmp/spdk-nbd.sock 00:07:58.793 18:14:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@829 -- # '[' -z 77456 ']' 00:07:58.793 18:14:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:58.793 18:14:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:58.793 18:14:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:58.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:58.793 18:14:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:58.793 18:14:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:58.793 [2024-07-11 18:14:45.060589] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
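(The nbd stage starting here exports each bdev as a kernel /dev/nbdX block device over the dedicated /var/tmp/spdk-nbd.sock RPC socket. A minimal sketch of exporting, listing, and detaching one bdev by hand, using the same RPCs traced below and assuming the nbd kernel module is loaded:
# Export a bdev as a kernel block device, inspect the mapping, then tear it down.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock
"$rpc" -s "$sock" nbd_start_disk Nvme0n1 /dev/nbd0
"$rpc" -s "$sock" nbd_get_disks     # -> [{"nbd_device": "/dev/nbd0", "bdev_name": "Nvme0n1"}]
"$rpc" -s "$sock" nbd_stop_disk /dev/nbd0
)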
00:07:58.793 [2024-07-11 18:14:45.060792] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:59.052 [2024-07-11 18:14:45.209075] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.052 [2024-07-11 18:14:45.244009] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.618 18:14:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:59.618 18:14:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@862 -- # return 0 00:07:59.618 18:14:45 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:59.618 18:14:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:59.618 18:14:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:59.619 18:14:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:07:59.619 18:14:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:59.619 18:14:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:59.619 18:14:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:59.619 18:14:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:07:59.619 18:14:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:07:59.619 18:14:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:07:59.619 18:14:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:07:59.619 18:14:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:59.619 18:14:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:07:59.877 18:14:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:07:59.877 18:14:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:07:59.877 18:14:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:07:59.877 18:14:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:07:59.877 18:14:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:07:59.877 18:14:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:59.877 18:14:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:59.877 18:14:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:07:59.877 18:14:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:07:59.877 18:14:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:59.877 18:14:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:59.877 18:14:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:59.877 1+0 records in 
00:07:59.877 1+0 records out 00:07:59.877 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000529883 s, 7.7 MB/s 00:07:59.877 18:14:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:59.877 18:14:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:07:59.877 18:14:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:59.877 18:14:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:59.877 18:14:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:07:59.877 18:14:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:59.877 18:14:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:59.877 18:14:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:08:00.444 18:14:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:08:00.444 18:14:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:08:00.444 18:14:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:08:00.444 18:14:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:08:00.444 18:14:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:08:00.444 18:14:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:00.444 18:14:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:00.444 18:14:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:08:00.444 18:14:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:00.444 18:14:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:00.444 18:14:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:00.444 18:14:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:00.444 1+0 records in 00:08:00.444 1+0 records out 00:08:00.444 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000517416 s, 7.9 MB/s 00:08:00.444 18:14:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:00.444 18:14:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:00.444 18:14:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:00.444 18:14:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:00.444 18:14:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:00.444 18:14:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:00.444 18:14:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:00.444 18:14:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:08:00.444 18:14:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:08:00.444 18:14:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:08:00.444 18:14:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
waitfornbd nbd2 00:08:00.444 18:14:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:08:00.444 18:14:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:08:00.444 18:14:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:00.444 18:14:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:00.444 18:14:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:08:00.703 18:14:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:00.703 18:14:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:00.703 18:14:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:00.703 18:14:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:00.703 1+0 records in 00:08:00.703 1+0 records out 00:08:00.703 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000501638 s, 8.2 MB/s 00:08:00.703 18:14:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:00.703 18:14:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:00.703 18:14:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:00.703 18:14:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:00.703 18:14:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:00.703 18:14:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:00.703 18:14:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:00.703 18:14:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:08:00.962 18:14:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:08:00.962 18:14:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:08:00.962 18:14:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:08:00.962 18:14:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:08:00.962 18:14:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:08:00.962 18:14:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:00.962 18:14:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:00.962 18:14:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:08:00.962 18:14:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:00.962 18:14:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:00.962 18:14:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:00.962 18:14:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:00.962 1+0 records in 00:08:00.962 1+0 records out 00:08:00.962 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000616606 s, 6.6 MB/s 00:08:00.962 18:14:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:00.962 18:14:47 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:00.962 18:14:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:00.962 18:14:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:00.962 18:14:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:00.962 18:14:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:00.962 18:14:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:00.962 18:14:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:08:01.220 18:14:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:08:01.221 18:14:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:08:01.221 18:14:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:08:01.221 18:14:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:08:01.221 18:14:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:08:01.221 18:14:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:01.221 18:14:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:01.221 18:14:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:08:01.221 18:14:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:01.221 18:14:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:01.221 18:14:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:01.221 18:14:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:01.221 1+0 records in 00:08:01.221 1+0 records out 00:08:01.221 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000493689 s, 8.3 MB/s 00:08:01.221 18:14:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:01.221 18:14:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:01.221 18:14:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:01.221 18:14:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:01.221 18:14:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:01.221 18:14:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:01.221 18:14:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:01.221 18:14:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:08:01.479 18:14:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:08:01.479 18:14:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:08:01.479 18:14:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:08:01.479 18:14:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:08:01.479 18:14:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:08:01.479 18:14:47 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:01.479 18:14:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:01.479 18:14:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:08:01.479 18:14:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:01.479 18:14:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:01.479 18:14:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:01.479 18:14:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:01.479 1+0 records in 00:08:01.479 1+0 records out 00:08:01.479 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000620591 s, 6.6 MB/s 00:08:01.479 18:14:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:01.479 18:14:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:01.479 18:14:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:01.479 18:14:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:01.479 18:14:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:01.479 18:14:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:01.479 18:14:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:01.479 18:14:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:01.737 18:14:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:08:01.737 { 00:08:01.737 "nbd_device": "/dev/nbd0", 00:08:01.737 "bdev_name": "Nvme0n1" 00:08:01.737 }, 00:08:01.737 { 00:08:01.737 "nbd_device": "/dev/nbd1", 00:08:01.737 "bdev_name": "Nvme1n1" 00:08:01.737 }, 00:08:01.737 { 00:08:01.737 "nbd_device": "/dev/nbd2", 00:08:01.737 "bdev_name": "Nvme2n1" 00:08:01.737 }, 00:08:01.737 { 00:08:01.737 "nbd_device": "/dev/nbd3", 00:08:01.737 "bdev_name": "Nvme2n2" 00:08:01.737 }, 00:08:01.737 { 00:08:01.737 "nbd_device": "/dev/nbd4", 00:08:01.737 "bdev_name": "Nvme2n3" 00:08:01.737 }, 00:08:01.737 { 00:08:01.737 "nbd_device": "/dev/nbd5", 00:08:01.737 "bdev_name": "Nvme3n1" 00:08:01.737 } 00:08:01.737 ]' 00:08:01.737 18:14:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:08:01.737 18:14:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:08:01.737 { 00:08:01.737 "nbd_device": "/dev/nbd0", 00:08:01.737 "bdev_name": "Nvme0n1" 00:08:01.737 }, 00:08:01.738 { 00:08:01.738 "nbd_device": "/dev/nbd1", 00:08:01.738 "bdev_name": "Nvme1n1" 00:08:01.738 }, 00:08:01.738 { 00:08:01.738 "nbd_device": "/dev/nbd2", 00:08:01.738 "bdev_name": "Nvme2n1" 00:08:01.738 }, 00:08:01.738 { 00:08:01.738 "nbd_device": "/dev/nbd3", 00:08:01.738 "bdev_name": "Nvme2n2" 00:08:01.738 }, 00:08:01.738 { 00:08:01.738 "nbd_device": "/dev/nbd4", 00:08:01.738 "bdev_name": "Nvme2n3" 00:08:01.738 }, 00:08:01.738 { 00:08:01.738 "nbd_device": "/dev/nbd5", 00:08:01.738 "bdev_name": "Nvme3n1" 00:08:01.738 } 00:08:01.738 ]' 00:08:01.738 18:14:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:08:01.738 18:14:48 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:08:01.738 18:14:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:01.738 18:14:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:08:01.738 18:14:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:01.738 18:14:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:01.738 18:14:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:01.738 18:14:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:01.996 18:14:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:01.996 18:14:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:01.996 18:14:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:01.996 18:14:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:01.996 18:14:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:01.996 18:14:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:01.996 18:14:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:01.996 18:14:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:01.996 18:14:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:01.996 18:14:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:02.255 18:14:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:02.255 18:14:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:02.255 18:14:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:02.255 18:14:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:02.255 18:14:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:02.255 18:14:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:02.255 18:14:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:02.255 18:14:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:02.255 18:14:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:02.255 18:14:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:08:02.513 18:14:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:08:02.513 18:14:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:08:02.513 18:14:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:08:02.513 18:14:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:02.513 18:14:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:02.513 18:14:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:08:02.513 18:14:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:02.513 18:14:48 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:08:02.513 18:14:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:02.513 18:14:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:08:02.772 18:14:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:08:02.772 18:14:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:08:02.772 18:14:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:08:02.772 18:14:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:02.772 18:14:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:02.772 18:14:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:08:02.772 18:14:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:02.772 18:14:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:02.772 18:14:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:02.772 18:14:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:08:03.030 18:14:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:08:03.030 18:14:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:08:03.030 18:14:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:08:03.030 18:14:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:03.030 18:14:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:03.030 18:14:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:08:03.030 18:14:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:03.030 18:14:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:03.030 18:14:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:03.030 18:14:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:08:03.635 18:14:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:08:03.635 18:14:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:08:03.635 18:14:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:08:03.635 18:14:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:03.635 18:14:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:03.635 18:14:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:08:03.635 18:14:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:03.635 18:14:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:03.635 18:14:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:03.636 18:14:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:03.636 18:14:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:03.636 18:14:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:03.636 18:14:49 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:03.636 18:14:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:03.636 18:14:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:03.636 18:14:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:08:03.636 18:14:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:03.636 18:14:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:08:03.636 18:14:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:08:03.636 18:14:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:08:03.636 18:14:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:08:03.636 18:14:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:08:03.636 18:14:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:08:03.636 18:14:50 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:08:03.636 18:14:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:03.636 18:14:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:03.636 18:14:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:03.636 18:14:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:03.636 18:14:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:03.636 18:14:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:08:03.636 18:14:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:03.636 18:14:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:03.636 18:14:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:03.636 18:14:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:03.636 18:14:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:03.636 18:14:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:08:03.636 18:14:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:03.636 18:14:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:03.636 18:14:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:08:04.200 /dev/nbd0 00:08:04.200 18:14:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:04.200 18:14:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:04.200 18:14:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:08:04.200 18:14:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:08:04.200 18:14:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:04.200 
18:14:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:04.200 18:14:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:08:04.200 18:14:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:04.200 18:14:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:04.200 18:14:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:04.200 18:14:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:04.200 1+0 records in 00:08:04.200 1+0 records out 00:08:04.200 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000924801 s, 4.4 MB/s 00:08:04.200 18:14:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:04.200 18:14:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:04.200 18:14:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:04.200 18:14:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:04.200 18:14:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:04.200 18:14:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:04.200 18:14:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:04.200 18:14:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:08:04.458 /dev/nbd1 00:08:04.458 18:14:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:04.458 18:14:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:04.458 18:14:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:08:04.458 18:14:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:08:04.458 18:14:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:04.458 18:14:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:04.458 18:14:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:08:04.458 18:14:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:04.458 18:14:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:04.458 18:14:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:04.458 18:14:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:04.458 1+0 records in 00:08:04.458 1+0 records out 00:08:04.458 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000737957 s, 5.6 MB/s 00:08:04.458 18:14:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:04.458 18:14:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:04.458 18:14:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:04.458 18:14:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:04.458 18:14:50 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@887 -- # return 0 00:08:04.458 18:14:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:04.458 18:14:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:04.458 18:14:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:08:04.717 /dev/nbd10 00:08:04.717 18:14:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:08:04.717 18:14:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:08:04.717 18:14:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:08:04.717 18:14:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:08:04.717 18:14:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:04.717 18:14:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:04.717 18:14:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:08:04.717 18:14:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:04.717 18:14:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:04.717 18:14:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:04.717 18:14:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:04.717 1+0 records in 00:08:04.717 1+0 records out 00:08:04.717 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000745108 s, 5.5 MB/s 00:08:04.717 18:14:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:04.717 18:14:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:04.717 18:14:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:04.717 18:14:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:04.717 18:14:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:04.717 18:14:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:04.717 18:14:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:04.717 18:14:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:08:04.975 /dev/nbd11 00:08:04.975 18:14:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:08:04.975 18:14:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:08:04.976 18:14:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:08:04.976 18:14:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:08:04.976 18:14:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:04.976 18:14:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:04.976 18:14:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:08:04.976 18:14:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:04.976 18:14:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:04.976 18:14:51 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:04.976 18:14:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:04.976 1+0 records in 00:08:04.976 1+0 records out 00:08:04.976 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000574116 s, 7.1 MB/s 00:08:04.976 18:14:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:04.976 18:14:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:04.976 18:14:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:04.976 18:14:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:04.976 18:14:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:04.976 18:14:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:04.976 18:14:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:04.976 18:14:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:08:05.234 /dev/nbd12 00:08:05.234 18:14:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:08:05.234 18:14:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:08:05.234 18:14:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:08:05.234 18:14:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:08:05.234 18:14:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:05.234 18:14:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:05.234 18:14:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 00:08:05.234 18:14:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:05.234 18:14:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:05.234 18:14:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:05.234 18:14:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:05.234 1+0 records in 00:08:05.234 1+0 records out 00:08:05.234 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00096553 s, 4.2 MB/s 00:08:05.234 18:14:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:05.234 18:14:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:05.234 18:14:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:05.234 18:14:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:05.234 18:14:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:05.234 18:14:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:05.234 18:14:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:05.234 18:14:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:08:05.802 /dev/nbd13 
00:08:05.802 18:14:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:08:05.802 18:14:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:08:05.802 18:14:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:08:05.802 18:14:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:08:05.802 18:14:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:05.802 18:14:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:05.802 18:14:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 00:08:05.802 18:14:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:05.802 18:14:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:05.802 18:14:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:05.802 18:14:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:05.802 1+0 records in 00:08:05.802 1+0 records out 00:08:05.802 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000718312 s, 5.7 MB/s 00:08:05.802 18:14:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:05.802 18:14:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:05.802 18:14:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:05.802 18:14:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:05.802 18:14:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:05.802 18:14:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:05.802 18:14:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:05.802 18:14:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:05.802 18:14:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:05.802 18:14:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:06.061 18:14:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:06.061 { 00:08:06.061 "nbd_device": "/dev/nbd0", 00:08:06.061 "bdev_name": "Nvme0n1" 00:08:06.061 }, 00:08:06.061 { 00:08:06.061 "nbd_device": "/dev/nbd1", 00:08:06.061 "bdev_name": "Nvme1n1" 00:08:06.061 }, 00:08:06.061 { 00:08:06.061 "nbd_device": "/dev/nbd10", 00:08:06.061 "bdev_name": "Nvme2n1" 00:08:06.061 }, 00:08:06.061 { 00:08:06.061 "nbd_device": "/dev/nbd11", 00:08:06.061 "bdev_name": "Nvme2n2" 00:08:06.061 }, 00:08:06.061 { 00:08:06.061 "nbd_device": "/dev/nbd12", 00:08:06.061 "bdev_name": "Nvme2n3" 00:08:06.061 }, 00:08:06.061 { 00:08:06.061 "nbd_device": "/dev/nbd13", 00:08:06.061 "bdev_name": "Nvme3n1" 00:08:06.061 } 00:08:06.061 ]' 00:08:06.061 18:14:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:06.061 18:14:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:06.061 { 00:08:06.061 "nbd_device": "/dev/nbd0", 00:08:06.061 "bdev_name": "Nvme0n1" 00:08:06.061 }, 00:08:06.061 { 00:08:06.061 "nbd_device": "/dev/nbd1", 00:08:06.061 "bdev_name": "Nvme1n1" 00:08:06.061 
}, 00:08:06.061 { 00:08:06.061 "nbd_device": "/dev/nbd10", 00:08:06.061 "bdev_name": "Nvme2n1" 00:08:06.061 }, 00:08:06.061 { 00:08:06.061 "nbd_device": "/dev/nbd11", 00:08:06.061 "bdev_name": "Nvme2n2" 00:08:06.061 }, 00:08:06.061 { 00:08:06.061 "nbd_device": "/dev/nbd12", 00:08:06.061 "bdev_name": "Nvme2n3" 00:08:06.061 }, 00:08:06.061 { 00:08:06.061 "nbd_device": "/dev/nbd13", 00:08:06.061 "bdev_name": "Nvme3n1" 00:08:06.061 } 00:08:06.061 ]' 00:08:06.061 18:14:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:06.061 /dev/nbd1 00:08:06.061 /dev/nbd10 00:08:06.061 /dev/nbd11 00:08:06.061 /dev/nbd12 00:08:06.061 /dev/nbd13' 00:08:06.061 18:14:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:06.061 /dev/nbd1 00:08:06.061 /dev/nbd10 00:08:06.061 /dev/nbd11 00:08:06.061 /dev/nbd12 00:08:06.061 /dev/nbd13' 00:08:06.061 18:14:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:06.061 18:14:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:08:06.061 18:14:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:08:06.061 18:14:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:08:06.061 18:14:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:08:06.061 18:14:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:08:06.061 18:14:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:06.061 18:14:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:06.061 18:14:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:06.061 18:14:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:06.061 18:14:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:06.061 18:14:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:08:06.061 256+0 records in 00:08:06.061 256+0 records out 00:08:06.061 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0070805 s, 148 MB/s 00:08:06.061 18:14:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:06.061 18:14:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:06.061 256+0 records in 00:08:06.061 256+0 records out 00:08:06.061 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.146512 s, 7.2 MB/s 00:08:06.061 18:14:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:06.061 18:14:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:06.320 256+0 records in 00:08:06.320 256+0 records out 00:08:06.320 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.179959 s, 5.8 MB/s 00:08:06.320 18:14:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:06.320 18:14:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:08:06.578 256+0 records in 00:08:06.578 256+0 records out 00:08:06.578 1048576 
bytes (1.0 MB, 1.0 MiB) copied, 0.16672 s, 6.3 MB/s 00:08:06.578 18:14:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:06.578 18:14:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:08:06.842 256+0 records in 00:08:06.842 256+0 records out 00:08:06.842 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.166391 s, 6.3 MB/s 00:08:06.842 18:14:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:06.842 18:14:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:08:06.842 256+0 records in 00:08:06.842 256+0 records out 00:08:06.842 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.180741 s, 5.8 MB/s 00:08:06.842 18:14:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:06.842 18:14:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:08:07.099 256+0 records in 00:08:07.099 256+0 records out 00:08:07.099 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.180033 s, 5.8 MB/s 00:08:07.099 18:14:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:08:07.099 18:14:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:07.099 18:14:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:07.099 18:14:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:07.099 18:14:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:07.099 18:14:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:07.099 18:14:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:07.099 18:14:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:07.099 18:14:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:08:07.099 18:14:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:07.099 18:14:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:08:07.099 18:14:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:07.099 18:14:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:08:07.099 18:14:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:07.099 18:14:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:08:07.099 18:14:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:07.099 18:14:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:08:07.099 18:14:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:07.099 18:14:53 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:08:07.099 18:14:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:07.099 18:14:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:08:07.099 18:14:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:07.099 18:14:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:07.099 18:14:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:07.099 18:14:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:07.099 18:14:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:07.099 18:14:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:07.358 18:14:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:07.358 18:14:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:07.358 18:14:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:07.358 18:14:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:07.358 18:14:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:07.358 18:14:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:07.358 18:14:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:07.358 18:14:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:07.358 18:14:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:07.358 18:14:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:07.616 18:14:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:07.616 18:14:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:07.616 18:14:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:07.616 18:14:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:07.616 18:14:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:07.616 18:14:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:07.616 18:14:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:07.616 18:14:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:07.616 18:14:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:07.617 18:14:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:08:08.203 18:14:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:08:08.203 18:14:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:08:08.203 18:14:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:08:08.203 18:14:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:08.203 18:14:54 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:08.203 18:14:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:08:08.203 18:14:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:08.203 18:14:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:08.203 18:14:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:08.203 18:14:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:08:08.203 18:14:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:08:08.203 18:14:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:08:08.203 18:14:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:08:08.203 18:14:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:08.203 18:14:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:08.203 18:14:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:08:08.203 18:14:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:08.203 18:14:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:08.203 18:14:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:08.203 18:14:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:08:08.462 18:14:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:08:08.462 18:14:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:08:08.462 18:14:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:08:08.462 18:14:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:08.462 18:14:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:08.462 18:14:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:08:08.462 18:14:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:08.462 18:14:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:08.462 18:14:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:08.462 18:14:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:08:09.027 18:14:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:08:09.027 18:14:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:08:09.027 18:14:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:08:09.027 18:14:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:09.027 18:14:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:09.027 18:14:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:08:09.027 18:14:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:09.027 18:14:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:09.027 18:14:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:09.027 18:14:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- 
# local rpc_server=/var/tmp/spdk-nbd.sock 00:08:09.027 18:14:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:09.286 18:14:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:09.286 18:14:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:09.286 18:14:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:09.286 18:14:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:09.286 18:14:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:08:09.286 18:14:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:09.286 18:14:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:08:09.286 18:14:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:08:09.286 18:14:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:08:09.286 18:14:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:08:09.286 18:14:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:09.286 18:14:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:08:09.286 18:14:55 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:08:09.286 18:14:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:09.286 18:14:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:09.286 18:14:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:08:09.286 18:14:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:08:09.286 18:14:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:08:09.544 malloc_lvol_verify 00:08:09.544 18:14:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:08:09.802 b2078f2e-0066-448a-9fc8-ac17733ef821 00:08:09.802 18:14:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:08:10.060 d419cda6-0e25-4677-95d6-0d16940476e0 00:08:10.060 18:14:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:08:10.319 /dev/nbd0 00:08:10.320 18:14:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:08:10.320 mke2fs 1.46.5 (30-Dec-2021) 00:08:10.320 Discarding device blocks: 0/4096 done 00:08:10.320 Creating filesystem with 4096 1k blocks and 1024 inodes 00:08:10.320 00:08:10.320 Allocating group tables: 0/1 done 00:08:10.320 Writing inode tables: 0/1 done 00:08:10.320 Creating journal (1024 blocks): done 00:08:10.320 Writing superblocks and filesystem accounting information: 0/1 done 00:08:10.320 00:08:10.320 18:14:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:08:10.320 18:14:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:08:10.320 18:14:56 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:10.320 18:14:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:08:10.320 18:14:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:10.320 18:14:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:10.320 18:14:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:10.320 18:14:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:10.579 18:14:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:10.579 18:14:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:10.579 18:14:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:10.579 18:14:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:10.579 18:14:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:10.579 18:14:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:10.579 18:14:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:10.579 18:14:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:10.579 18:14:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:08:10.579 18:14:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:08:10.579 18:14:56 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # killprocess 77456 00:08:10.579 18:14:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@948 -- # '[' -z 77456 ']' 00:08:10.579 18:14:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@952 -- # kill -0 77456 00:08:10.579 18:14:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@953 -- # uname 00:08:10.579 18:14:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:10.579 18:14:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77456 00:08:10.579 18:14:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:10.579 18:14:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:10.579 killing process with pid 77456 00:08:10.579 18:14:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77456' 00:08:10.579 18:14:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@967 -- # kill 77456 00:08:10.579 18:14:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # wait 77456 00:08:10.837 18:14:57 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:08:10.838 00:08:10.838 real 0m12.073s 00:08:10.838 user 0m17.603s 00:08:10.838 sys 0m4.109s 00:08:10.838 18:14:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:10.838 18:14:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:08:10.838 ************************************ 00:08:10.838 END TEST bdev_nbd 00:08:10.838 ************************************ 00:08:10.838 18:14:57 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:08:10.838 18:14:57 blockdev_nvme -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:08:10.838 18:14:57 blockdev_nvme -- bdev/blockdev.sh@764 -- # '[' nvme = nvme ']' 00:08:10.838 skipping fio tests on NVMe due to multi-ns failures. 
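The bdev_nbd trace above reduces to a simple round trip: start one NBD device per bdev over the RPC socket, wait for the kernel to expose it, push 1 MiB of random data through every device and compare it back, then stop everything and confirm the target reports no disks. A condensed sketch of that flow, assuming a running spdk-nbd target on /var/tmp/spdk-nbd.sock (paths and helper names mirror the trace; the real logic lives in test/bdev/nbd_common.sh and common/autotest_common.sh):

    #!/usr/bin/env bash
    rpc="scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    bdevs=(Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1)
    nbds=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)

    for i in "${!bdevs[@]}"; do
        $rpc nbd_start_disk "${bdevs[$i]}" "${nbds[$i]}"
        # waitfornbd: poll /proc/partitions (up to 20 tries in the trace),
        # then prove the device is readable with one 4 KiB O_DIRECT read
        name=$(basename "${nbds[$i]}")
        for ((try = 1; try <= 20; try++)); do
            grep -q -w "$name" /proc/partitions && break
            sleep 0.1
        done
        dd if="${nbds[$i]}" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
        [[ $(stat -c %s /tmp/nbdtest) -ne 0 ]] || exit 1
    done

    # nbd_dd_data_verify: one shared 1 MiB random file, written to and
    # then compared against every device
    dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256
    for nbd in "${nbds[@]}"; do
        dd if=/tmp/nbdrandtest of="$nbd" bs=4096 count=256 oflag=direct
        cmp -b -n 1M /tmp/nbdrandtest "$nbd" || exit 1
    done

    for nbd in "${nbds[@]}"; do $rpc nbd_stop_disk "$nbd"; done
    # after the stop loop, nbd_get_disks must be an empty list again
    [[ $($rpc nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd) -eq 0 ]]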
00:08:10.838 18:14:57 blockdev_nvme -- bdev/blockdev.sh@766 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:08:10.838 18:14:57 blockdev_nvme -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:10.838 18:14:57 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:10.838 18:14:57 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:08:10.838 18:14:57 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:10.838 18:14:57 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:10.838 ************************************ 00:08:10.838 START TEST bdev_verify 00:08:10.838 ************************************ 00:08:10.838 18:14:57 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:10.838 [2024-07-11 18:14:57.166265] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:08:10.838 [2024-07-11 18:14:57.166440] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77858 ] 00:08:11.097 [2024-07-11 18:14:57.303697] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:11.097 [2024-07-11 18:14:57.338570] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.097 [2024-07-11 18:14:57.338626] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:11.355 Running I/O for 5 seconds... 
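The verify pass starting here is a single bdevperf invocation, shown in full by the run_test line above. Stated on its own (paths relative to the SPDK repo; flag comments reflect the common bdevperf options, with -C and the trailing empty argument passed through exactly as the harness does):

    # -q 128: 128 outstanding I/Os; -o 4096: 4 KiB I/Os; -w verify: write,
    # read back and compare; -t 5: run 5 seconds; -m 0x3: two reactor cores
    build/examples/bdevperf --json test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''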
00:08:16.627
00:08:16.627 Latency(us)
00:08:16.627 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:16.627 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:16.627 Verification LBA range: start 0x0 length 0xbd0bd
00:08:16.627 Nvme0n1 : 5.05 1495.59 5.84 0.00 0.00 85144.58 16681.89 80073.08
00:08:16.627 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:16.627 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:08:16.627 Nvme0n1 : 5.07 1541.28 6.02 0.00 0.00 82772.06 17396.83 84362.71
00:08:16.627 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:16.627 Verification LBA range: start 0x0 length 0xa0000
00:08:16.627 Nvme1n1 : 5.09 1496.32 5.85 0.00 0.00 84864.89 11975.21 74353.57
00:08:16.627 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:16.627 Verification LBA range: start 0xa0000 length 0xa0000
00:08:16.627 Nvme1n1 : 5.07 1540.69 6.02 0.00 0.00 82599.27 19303.33 74353.57
00:08:16.627 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:16.627 Verification LBA range: start 0x0 length 0x80000
00:08:16.627 Nvme2n1 : 5.09 1495.45 5.84 0.00 0.00 84699.29 11975.21 72923.69
00:08:16.627 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:16.627 Verification LBA range: start 0x80000 length 0x80000
00:08:16.627 Nvme2n1 : 5.07 1540.10 6.02 0.00 0.00 82417.82 17873.45 71017.19
00:08:16.627 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:16.627 Verification LBA range: start 0x0 length 0x80000
00:08:16.627 Nvme2n2 : 5.11 1502.84 5.87 0.00 0.00 84413.66 11439.01 71493.82
00:08:16.627 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:16.627 Verification LBA range: start 0x80000 length 0x80000
00:08:16.627 Nvme2n2 : 5.07 1539.54 6.01 0.00 0.00 82225.63 17039.36 71017.19
00:08:16.627 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:16.627 Verification LBA range: start 0x0 length 0x80000
00:08:16.627 Nvme2n3 : 5.11 1502.08 5.87 0.00 0.00 84276.93 11677.32 72447.07
00:08:16.627 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:16.627 Verification LBA range: start 0x80000 length 0x80000
00:08:16.627 Nvme2n3 : 5.08 1548.84 6.05 0.00 0.00 81631.47 4944.99 73876.95
00:08:16.627 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:16.627 Verification LBA range: start 0x0 length 0x20000
00:08:16.627 Nvme3n1 : 5.12 1501.37 5.86 0.00 0.00 84132.39 11856.06 75783.45
00:08:16.627 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:16.627 Verification LBA range: start 0x20000 length 0x20000
00:08:16.627 Nvme3n1 : 5.08 1548.21 6.05 0.00 0.00 81493.89 4825.83 76736.70
00:08:16.627 ===================================================================================================================
00:08:16.627 Total : 18252.32 71.30 0.00 0.00 83372.54 4825.83 84362.71
00:08:16.885
00:08:16.885 real 0m6.043s
00:08:16.885 user 0m11.337s
00:08:16.885 sys 0m0.196s
00:08:16.885 18:15:03 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1124 -- # xtrace_disable
00:08:16.885 18:15:03 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:08:16.885 ************************************
00:08:16.885 END TEST bdev_verify
00:08:16.885 ************************************
00:08:16.885 18:15:03 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0
00:08:16.885 18:15:03 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:08:16.885 18:15:03 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']'
00:08:16.885 18:15:03 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable
00:08:16.885 18:15:03 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:08:16.885 ************************************
00:08:16.885 START TEST bdev_verify_big_io
00:08:16.885 ************************************
00:08:16.886 18:15:03 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:08:16.886 [2024-07-11 18:15:03.282417] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization...
00:08:16.886 [2024-07-11 18:15:03.282618] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77940 ]
00:08:17.145 [2024-07-11 18:15:03.432808] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2
00:08:17.145 [2024-07-11 18:15:03.475426] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:08:17.145 [2024-07-11 18:15:03.475466] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:08:17.709 Running I/O for 5 seconds...
00:08:24.351
00:08:24.351 Latency(us)
00:08:24.351 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:24.351 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:24.351 Verification LBA range: start 0x0 length 0xbd0b
00:08:24.351 Nvme0n1 : 5.61 114.06 7.13 0.00 0.00 1076850.59 21805.61 1044763.00
00:08:24.351 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:24.351 Verification LBA range: start 0xbd0b length 0xbd0b
00:08:24.351 Nvme0n1 : 5.84 114.54 7.16 0.00 0.00 1073497.05 27286.81 1311673.25
00:08:24.351 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:24.351 Verification LBA range: start 0x0 length 0xa000
00:08:24.351 Nvme1n1 : 5.78 113.86 7.12 0.00 0.00 1033924.84 71493.82 968502.92
00:08:24.351 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:24.351 Verification LBA range: start 0xa000 length 0xa000
00:08:24.351 Nvme1n1 : 5.75 119.61 7.48 0.00 0.00 994075.82 44087.85 911307.87
00:08:24.351 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:24.351 Verification LBA range: start 0x0 length 0x8000
00:08:24.351 Nvme2n1 : 5.84 120.53 7.53 0.00 0.00 968009.88 58148.31 983754.94
00:08:24.351 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:24.351 Verification LBA range: start 0x8000 length 0x8000
00:08:24.351 Nvme2n1 : 5.84 117.72 7.36 0.00 0.00 986374.99 89128.96 1387933.32
00:08:24.351 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:24.351 Verification LBA range: start 0x0 length 0x8000
00:08:24.351 Nvme2n2 : 5.93 126.63 7.91 0.00 0.00 898675.79 30027.40 1006632.96
00:08:24.351 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:24.351 Verification LBA range: start 0x8000 length 0x8000
00:08:24.351 Nvme2n2 : 5.91 119.51 7.47 0.00 0.00 935103.53 60293.12 1418437.35
00:08:24.351 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:24.351 Verification LBA range: start 0x0 length 0x8000
00:08:24.351 Nvme2n3 : 5.93 129.44 8.09 0.00 0.00 854629.78 56003.49 1037136.99
00:08:24.351 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:24.351 Verification LBA range: start 0x8000 length 0x8000
00:08:24.351 Nvme2n3 : 5.98 131.51 8.22 0.00 0.00 826419.57 24069.59 1456567.39
00:08:24.351 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:24.351 Verification LBA range: start 0x0 length 0x2000
00:08:24.351 Nvme3n1 : 5.94 140.06 8.75 0.00 0.00 767413.17 1541.59 1067641.02
00:08:24.351 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:24.351 Verification LBA range: start 0x2000 length 0x2000
00:08:24.351 Nvme3n1 : 5.99 153.48 9.59 0.00 0.00 689315.86 2204.39 1288795.23
00:08:24.351 ===================================================================================================================
00:08:24.351 Total : 1500.95 93.81 0.00 0.00 913234.72 1541.59 1456567.39
00:08:24.351
00:08:24.351 real 0m7.299s
00:08:24.351 user 0m13.748s
00:08:24.351 sys 0m0.250s
00:08:24.351 18:15:10 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1124 -- # xtrace_disable
00:08:24.351 18:15:10 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:08:24.351 ************************************
00:08:24.351 END TEST bdev_verify_big_io
00:08:24.351 ************************************
00:08:24.351 18:15:10 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0
00:08:24.351 18:15:10 blockdev_nvme -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:08:24.351 18:15:10 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']'
00:08:24.351 18:15:10 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable
00:08:24.351 18:15:10 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:08:24.351 ************************************
00:08:24.351 START TEST bdev_write_zeroes
00:08:24.351 ************************************
00:08:24.351 18:15:10 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:08:24.611 [2024-07-11 18:15:10.650265] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization...
00:08:24.611 [2024-07-11 18:15:10.650505] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78038 ]
00:08:24.611 [2024-07-11 18:15:10.804359] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:24.611 [2024-07-11 18:15:10.845152] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:08:24.869 Running I/O for 1 seconds...
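All three bdevperf passes in this suite share the same bdev config and differ only in workload, I/O size, and duration; the run_test lines above show the exact parameters. Side by side, as a sketch (word-splitting of $args is deliberate here):

    # verify:        4 KiB I/Os, 5 s, two cores -- data-integrity pass
    # verify_big_io: 64 KiB I/Os, 5 s, two cores -- same pass with large I/Os
    # write_zeroes:  4 KiB write-zeroes commands, 1 s, one core
    for args in "-o 4096 -w verify -t 5 -C -m 0x3" \
                "-o 65536 -w verify -t 5 -C -m 0x3" \
                "-o 4096 -w write_zeroes -t 1"; do
        build/examples/bdevperf --json test/bdev/bdev.json -q 128 $args ''
    done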
00:08:26.241
00:08:26.241 Latency(us)
00:08:26.241 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:26.241 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:26.241 Nvme0n1 : 1.02 8026.65 31.35 0.00 0.00 15889.40 6136.55 71970.44
00:08:26.242 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:26.242 Nvme1n1 : 1.02 8294.98 32.40 0.00 0.00 15348.51 11617.75 50045.67
00:08:26.242 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:26.242 Nvme2n1 : 1.02 8345.27 32.60 0.00 0.00 15202.29 10604.92 50522.30
00:08:26.242 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:26.242 Nvme2n2 : 1.02 8320.15 32.50 0.00 0.00 15220.97 7298.33 49807.36
00:08:26.242 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:26.242 Nvme2n3 : 1.02 8307.71 32.45 0.00 0.00 15215.84 6613.18 49807.36
00:08:26.242 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:26.242 Nvme3n1 : 1.03 8294.82 32.40 0.00 0.00 15211.49 6553.60 51475.55
00:08:26.242 ===================================================================================================================
00:08:26.242 Total : 49589.57 193.71 0.00 0.00 15344.34 6136.55 71970.44
00:08:26.242
00:08:26.242 real 0m1.989s
00:08:26.242 user 0m1.654s
00:08:26.242 sys 0m0.214s
00:08:26.242 18:15:12 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1124 -- # xtrace_disable
00:08:26.242 18:15:12 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:08:26.242 ************************************
00:08:26.242 END TEST bdev_write_zeroes
00:08:26.242 ************************************
00:08:26.242 18:15:12 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0
00:08:26.242 18:15:12 blockdev_nvme -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:08:26.242 18:15:12 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']'
00:08:26.242 18:15:12 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable
00:08:26.242 18:15:12 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:08:26.242 ************************************
00:08:26.242 START TEST bdev_json_nonenclosed
00:08:26.242 ************************************
00:08:26.242 18:15:12 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:08:26.500 [2024-07-11 18:15:12.691676] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization...
00:08:26.500 [2024-07-11 18:15:12.691883] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78080 ] 00:08:26.500 [2024-07-11 18:15:12.841565] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.500 [2024-07-11 18:15:12.888134] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.500 [2024-07-11 18:15:12.888275] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:08:26.500 [2024-07-11 18:15:12.888319] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:26.500 [2024-07-11 18:15:12.888347] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:26.759 00:08:26.759 real 0m0.414s 00:08:26.759 user 0m0.194s 00:08:26.759 sys 0m0.116s 00:08:26.759 18:15:13 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # es=234 00:08:26.759 18:15:13 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:26.759 18:15:13 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:08:26.759 ************************************ 00:08:26.759 END TEST bdev_json_nonenclosed 00:08:26.759 ************************************ 00:08:26.759 18:15:13 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 234 00:08:26.759 18:15:13 blockdev_nvme -- bdev/blockdev.sh@782 -- # true 00:08:26.759 18:15:13 blockdev_nvme -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:26.759 18:15:13 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:08:26.759 18:15:13 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:26.759 18:15:13 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:26.759 ************************************ 00:08:26.759 START TEST bdev_json_nonarray 00:08:26.759 ************************************ 00:08:26.759 18:15:13 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:26.759 [2024-07-11 18:15:13.166235] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:08:26.759 [2024-07-11 18:15:13.166438] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78111 ] 00:08:27.017 [2024-07-11 18:15:13.318058] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.017 [2024-07-11 18:15:13.365635] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.017 [2024-07-11 18:15:13.365776] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
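bdev_json_nonenclosed and bdev_json_nonarray are negative tests: each hands bdevperf a deliberately malformed config (first a JSON body not enclosed in {}, then one whose 'subsystems' is not an array) and passes only when the app refuses to start, which is the es=234 the trace records. A sketch of that expectation (assuming, as the trace suggests, that a non-zero exit status is what run_test checks for):

    for bad in test/bdev/nonenclosed.json test/bdev/nonarray.json; do
        build/examples/bdevperf --json "$bad" -q 128 -o 4096 -w write_zeroes -t 1 ''
        es=$?
        # a zero status here would mean the invalid config was accepted,
        # which is the real failure case for these tests
        [[ $es -ne 0 ]] || exit 1
    done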
00:08:27.017 [2024-07-11 18:15:13.365848] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:27.017 [2024-07-11 18:15:13.365870] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:27.275 00:08:27.275 real 0m0.418s 00:08:27.275 user 0m0.196s 00:08:27.275 sys 0m0.117s 00:08:27.275 18:15:13 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # es=234 00:08:27.275 18:15:13 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:27.275 18:15:13 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:08:27.275 ************************************ 00:08:27.275 END TEST bdev_json_nonarray 00:08:27.275 ************************************ 00:08:27.275 18:15:13 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 234 00:08:27.275 18:15:13 blockdev_nvme -- bdev/blockdev.sh@785 -- # true 00:08:27.275 18:15:13 blockdev_nvme -- bdev/blockdev.sh@787 -- # [[ nvme == bdev ]] 00:08:27.275 18:15:13 blockdev_nvme -- bdev/blockdev.sh@794 -- # [[ nvme == gpt ]] 00:08:27.275 18:15:13 blockdev_nvme -- bdev/blockdev.sh@798 -- # [[ nvme == crypto_sw ]] 00:08:27.275 18:15:13 blockdev_nvme -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:08:27.275 18:15:13 blockdev_nvme -- bdev/blockdev.sh@811 -- # cleanup 00:08:27.275 18:15:13 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:08:27.275 18:15:13 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:27.275 18:15:13 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:08:27.275 18:15:13 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:08:27.275 18:15:13 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:08:27.275 18:15:13 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:08:27.275 00:08:27.275 real 0m33.025s 00:08:27.275 user 0m51.447s 00:08:27.275 sys 0m6.240s 00:08:27.275 18:15:13 blockdev_nvme -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:27.275 18:15:13 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:27.275 ************************************ 00:08:27.275 END TEST blockdev_nvme 00:08:27.275 ************************************ 00:08:27.275 18:15:13 -- common/autotest_common.sh@1142 -- # return 0 00:08:27.275 18:15:13 -- spdk/autotest.sh@213 -- # uname -s 00:08:27.275 18:15:13 -- spdk/autotest.sh@213 -- # [[ Linux == Linux ]] 00:08:27.275 18:15:13 -- spdk/autotest.sh@214 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:08:27.275 18:15:13 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:27.275 18:15:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:27.275 18:15:13 -- common/autotest_common.sh@10 -- # set +x 00:08:27.275 ************************************ 00:08:27.275 START TEST blockdev_nvme_gpt 00:08:27.275 ************************************ 00:08:27.275 18:15:13 blockdev_nvme_gpt -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:08:27.275 * Looking for test storage... 
00:08:27.275 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:08:27.275 18:15:13 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:08:27.275 18:15:13 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:08:27.275 18:15:13 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:08:27.275 18:15:13 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:27.275 18:15:13 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:08:27.275 18:15:13 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:08:27.275 18:15:13 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:08:27.275 18:15:13 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:08:27.275 18:15:13 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:08:27.534 18:15:13 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:08:27.534 18:15:13 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:08:27.534 18:15:13 blockdev_nvme_gpt -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:08:27.534 18:15:13 blockdev_nvme_gpt -- bdev/blockdev.sh@674 -- # uname -s 00:08:27.534 18:15:13 blockdev_nvme_gpt -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:08:27.534 18:15:13 blockdev_nvme_gpt -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:08:27.534 18:15:13 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # test_type=gpt 00:08:27.534 18:15:13 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # crypto_device= 00:08:27.534 18:15:13 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # dek= 00:08:27.534 18:15:13 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # env_ctx= 00:08:27.534 18:15:13 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:08:27.534 18:15:13 blockdev_nvme_gpt -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:08:27.534 18:15:13 blockdev_nvme_gpt -- bdev/blockdev.sh@690 -- # [[ gpt == bdev ]] 00:08:27.534 18:15:13 blockdev_nvme_gpt -- bdev/blockdev.sh@690 -- # [[ gpt == crypto_* ]] 00:08:27.534 18:15:13 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:08:27.534 18:15:13 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=78181 00:08:27.534 18:15:13 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:08:27.534 18:15:13 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:27.534 18:15:13 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 78181 00:08:27.534 18:15:13 blockdev_nvme_gpt -- common/autotest_common.sh@829 -- # '[' -z 78181 ']' 00:08:27.534 18:15:13 blockdev_nvme_gpt -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:27.534 18:15:13 blockdev_nvme_gpt -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:27.534 18:15:13 blockdev_nvme_gpt -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:27.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
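waitforlisten blocks until the spdk_tgt just launched (pid 78181 here) is actually serving RPCs on /var/tmp/spdk.sock. A minimal sketch of that wait, assuming a plain poll of the process and its socket; the real helper in common/autotest_common.sh may additionally issue an RPC to confirm readiness:

    spdk_tgt_pid=78181                                # pid from the trace
    for ((i = 0; i < 100; i++)); do
        kill -0 "$spdk_tgt_pid" 2>/dev/null || exit 1 # target died early
        [[ -S /var/tmp/spdk.sock ]] && break          # RPC socket is up
        sleep 0.1
    done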
00:08:27.534 18:15:13 blockdev_nvme_gpt -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:27.534 18:15:13 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:27.534 [2024-07-11 18:15:13.810264] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:08:27.534 [2024-07-11 18:15:13.810459] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78181 ] 00:08:27.793 [2024-07-11 18:15:13.963730] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.793 [2024-07-11 18:15:14.010537] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.793 18:15:14 blockdev_nvme_gpt -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:27.793 18:15:14 blockdev_nvme_gpt -- common/autotest_common.sh@862 -- # return 0 00:08:27.793 18:15:14 blockdev_nvme_gpt -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:08:27.793 18:15:14 blockdev_nvme_gpt -- bdev/blockdev.sh@702 -- # setup_gpt_conf 00:08:27.793 18:15:14 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:28.360 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:28.360 Waiting for block devices as requested 00:08:28.618 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:08:28.618 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:08:28.618 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:08:28.876 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:08:34.162 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:08:34.162 18:15:20 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:08:34.162 18:15:20 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:08:34.162 18:15:20 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:08:34.162 18:15:20 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # local nvme bdf 00:08:34.162 18:15:20 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:08:34.162 18:15:20 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:08:34.162 18:15:20 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:08:34.162 18:15:20 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:08:34.162 18:15:20 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:08:34.162 18:15:20 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:08:34.162 18:15:20 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:08:34.162 18:15:20 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:08:34.162 18:15:20 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:08:34.162 18:15:20 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:08:34.162 18:15:20 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:08:34.162 18:15:20 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1 00:08:34.162 18:15:20 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local 
device=nvme2n1 00:08:34.162 18:15:20 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:08:34.162 18:15:20 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:08:34.162 18:15:20 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:08:34.162 18:15:20 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n2 00:08:34.162 18:15:20 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme2n2 00:08:34.162 18:15:20 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:08:34.162 18:15:20 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:08:34.162 18:15:20 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:08:34.162 18:15:20 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n3 00:08:34.162 18:15:20 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme2n3 00:08:34.162 18:15:20 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:08:34.162 18:15:20 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:08:34.162 18:15:20 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:08:34.162 18:15:20 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3c3n1 00:08:34.162 18:15:20 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme3c3n1 00:08:34.162 18:15:20 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:08:34.162 18:15:20 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:08:34.162 18:15:20 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:08:34.162 18:15:20 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3n1 00:08:34.162 18:15:20 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme3n1 00:08:34.162 18:15:20 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:08:34.162 18:15:20 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:08:34.162 18:15:20 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # nvme_devs=('/sys/bus/pci/drivers/nvme/0000:00:10.0/nvme/nvme1/nvme1n1' '/sys/bus/pci/drivers/nvme/0000:00:11.0/nvme/nvme0/nvme0n1' '/sys/bus/pci/drivers/nvme/0000:00:12.0/nvme/nvme2/nvme2n1' '/sys/bus/pci/drivers/nvme/0000:00:12.0/nvme/nvme2/nvme2n2' '/sys/bus/pci/drivers/nvme/0000:00:12.0/nvme/nvme2/nvme2n3' '/sys/bus/pci/drivers/nvme/0000:00:13.0/nvme/nvme3/nvme3c3n1') 00:08:34.162 18:15:20 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # local nvme_devs nvme_dev 00:08:34.162 18:15:20 blockdev_nvme_gpt -- bdev/blockdev.sh@108 -- # gpt_nvme= 00:08:34.162 18:15:20 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # for nvme_dev in "${nvme_devs[@]}" 00:08:34.162 18:15:20 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # [[ -z '' ]] 00:08:34.162 18:15:20 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # dev=/dev/nvme1n1 00:08:34.162 18:15:20 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # parted /dev/nvme1n1 -ms print 00:08:34.162 18:15:20 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # pt='Error: /dev/nvme1n1: unrecognised disk label 00:08:34.162 BYT; 00:08:34.162 /dev/nvme1n1:6343MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:08:34.162 
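The loop above is blockdev.sh choosing a target for the GPT test: each NVMe namespace is probed with parted in machine-readable script mode, and the first device that still reports "unrecognised disk label" (no partition table, so safe to repartition) becomes gpt_nvme, here /dev/nvme1n1. A condensed sketch of that selection, assuming plain /dev/nvme*n* nodes are probed (the script actually walks the sysfs paths collected in nvme_devs, having already filtered out zoned namespaces with get_zoned_devs):

    # Pick the first unlabelled NVMe namespace as the GPT scratch device.
    gpt_nvme=
    for dev in /dev/nvme*n*; do
        pt=$(parted "$dev" -ms print 2>&1 || true)   # -m machine-readable, -s non-interactive
        if [[ $pt == *"$dev: unrecognised disk label"* ]]; then
            gpt_nvme=$dev
            break
        fi
    done
    [[ -n $gpt_nvme ]] || { echo "no unlabelled NVMe namespace found" >&2; exit 1; }

Once a device is chosen, the lines that follow give it a fresh GPT label split into SPDK_TEST_first and SPDK_TEST_second, then use sgdisk to retag the two partitions with the SPDK partition type GUIDs grepped out of module/bdev/gpt/gpt.h.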
18:15:20 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # [[ Error: /dev/nvme1n1: unrecognised disk label 00:08:34.162 BYT; 00:08:34.162 /dev/nvme1n1:6343MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\1\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:08:34.162 18:15:20 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # gpt_nvme=/dev/nvme1n1 00:08:34.162 18:15:20 blockdev_nvme_gpt -- bdev/blockdev.sh@116 -- # break 00:08:34.162 18:15:20 blockdev_nvme_gpt -- bdev/blockdev.sh@119 -- # [[ -n /dev/nvme1n1 ]] 00:08:34.162 18:15:20 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:08:34.162 18:15:20 blockdev_nvme_gpt -- bdev/blockdev.sh@125 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:08:34.162 18:15:20 blockdev_nvme_gpt -- bdev/blockdev.sh@128 -- # parted -s /dev/nvme1n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:08:34.162 18:15:20 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt_old 00:08:34.162 18:15:20 blockdev_nvme_gpt -- scripts/common.sh@408 -- # local spdk_guid 00:08:34.162 18:15:20 blockdev_nvme_gpt -- scripts/common.sh@410 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:08:34.162 18:15:20 blockdev_nvme_gpt -- scripts/common.sh@412 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:34.162 18:15:20 blockdev_nvme_gpt -- scripts/common.sh@413 -- # IFS='()' 00:08:34.162 18:15:20 blockdev_nvme_gpt -- scripts/common.sh@413 -- # read -r _ spdk_guid _ 00:08:34.162 18:15:20 blockdev_nvme_gpt -- scripts/common.sh@413 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:34.162 18:15:20 blockdev_nvme_gpt -- scripts/common.sh@414 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:08:34.162 18:15:20 blockdev_nvme_gpt -- scripts/common.sh@414 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:08:34.162 18:15:20 blockdev_nvme_gpt -- scripts/common.sh@416 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:08:34.162 18:15:20 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:08:34.162 18:15:20 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # get_spdk_gpt 00:08:34.162 18:15:20 blockdev_nvme_gpt -- scripts/common.sh@420 -- # local spdk_guid 00:08:34.162 18:15:20 blockdev_nvme_gpt -- scripts/common.sh@422 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:08:34.162 18:15:20 blockdev_nvme_gpt -- scripts/common.sh@424 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:34.162 18:15:20 blockdev_nvme_gpt -- scripts/common.sh@425 -- # IFS='()' 00:08:34.162 18:15:20 blockdev_nvme_gpt -- scripts/common.sh@425 -- # read -r _ spdk_guid _ 00:08:34.162 18:15:20 blockdev_nvme_gpt -- scripts/common.sh@425 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:34.162 18:15:20 blockdev_nvme_gpt -- scripts/common.sh@426 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:08:34.162 18:15:20 blockdev_nvme_gpt -- scripts/common.sh@426 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:08:34.162 18:15:20 blockdev_nvme_gpt -- scripts/common.sh@428 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:08:34.162 18:15:20 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:08:34.162 18:15:20 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 
1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme1n1 00:08:35.096 The operation has completed successfully. 00:08:35.096 18:15:21 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme1n1 00:08:36.041 The operation has completed successfully. 00:08:36.041 18:15:22 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:36.624 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:37.192 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:08:37.192 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:08:37.192 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:08:37.192 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:08:37.192 18:15:23 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # rpc_cmd bdev_get_bdevs 00:08:37.192 18:15:23 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.192 18:15:23 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:37.192 [] 00:08:37.192 18:15:23 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.192 18:15:23 blockdev_nvme_gpt -- bdev/blockdev.sh@136 -- # setup_nvme_conf 00:08:37.192 18:15:23 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:08:37.192 18:15:23 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:08:37.192 18:15:23 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:37.192 18:15:23 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:08:37.192 18:15:23 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.192 18:15:23 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:37.451 18:15:23 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.451 18:15:23 blockdev_nvme_gpt -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:08:37.451 18:15:23 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.451 18:15:23 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:37.451 18:15:23 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.451 18:15:23 blockdev_nvme_gpt -- bdev/blockdev.sh@740 -- # cat 00:08:37.451 18:15:23 blockdev_nvme_gpt -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:08:37.451 18:15:23 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.451 18:15:23 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:37.451 18:15:23 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.451 18:15:23 blockdev_nvme_gpt -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:08:37.451 18:15:23 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.451 18:15:23 blockdev_nvme_gpt -- 
common/autotest_common.sh@10 -- # set +x 00:08:37.711 18:15:23 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.711 18:15:23 blockdev_nvme_gpt -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:08:37.711 18:15:23 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.711 18:15:23 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:37.711 18:15:23 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.711 18:15:23 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:08:37.711 18:15:23 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:08:37.711 18:15:23 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:08:37.711 18:15:23 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.711 18:15:23 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:37.711 18:15:23 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.711 18:15:23 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:08:37.711 18:15:23 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # jq -r .name 00:08:37.712 18:15:24 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "Nvme0n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 774144,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme0n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 774143,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 774400,' ' 
"partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "095370f3-2dd0-45fc-8f9f-7ac43a4d9cdd"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "095370f3-2dd0-45fc-8f9f-7ac43a4d9cdd",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "71c2b702-5b0b-46e6-83f5-dcd342d2e1dd"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "71c2b702-5b0b-46e6-83f5-dcd342d2e1dd",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "1ed7457c-9a2d-4d09-affb-d4f9aceb6977"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "1ed7457c-9a2d-4d09-affb-d4f9aceb6977",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "59c1c79c-1a2d-468f-96f3-5ca7c7dcbb7d"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "59c1c79c-1a2d-468f-96f3-5ca7c7dcbb7d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "45efc722-242a-4a37-9f8a-c58a5c71d4fe"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "45efc722-242a-4a37-9f8a-c58a5c71d4fe",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": 
false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:08:37.712 18:15:24 blockdev_nvme_gpt -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:08:37.712 18:15:24 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # hello_world_bdev=Nvme0n1p1 00:08:37.712 18:15:24 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:08:37.712 18:15:24 blockdev_nvme_gpt -- bdev/blockdev.sh@754 -- # killprocess 78181 00:08:37.712 18:15:24 blockdev_nvme_gpt -- common/autotest_common.sh@948 -- # '[' -z 78181 ']' 00:08:37.712 18:15:24 blockdev_nvme_gpt -- common/autotest_common.sh@952 -- # kill -0 78181 00:08:37.712 18:15:24 blockdev_nvme_gpt -- common/autotest_common.sh@953 -- # uname 00:08:37.712 18:15:24 blockdev_nvme_gpt -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:37.712 18:15:24 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78181 00:08:37.712 18:15:24 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:37.712 18:15:24 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:37.712 18:15:24 blockdev_nvme_gpt -- common/autotest_common.sh@966 -- # echo 'killing process with pid 78181' 00:08:37.712 killing process with pid 78181 00:08:37.712 18:15:24 blockdev_nvme_gpt -- common/autotest_common.sh@967 -- # kill 78181 00:08:37.712 18:15:24 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # wait 78181 00:08:37.971 18:15:24 blockdev_nvme_gpt -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:37.971 18:15:24 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:08:37.971 18:15:24 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:08:37.971 18:15:24 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:37.971 18:15:24 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:37.971 ************************************ 00:08:37.971 START TEST bdev_hello_world 00:08:37.971 ************************************ 00:08:37.971 18:15:24 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:08:38.229 [2024-07-11 18:15:24.470708] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
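The bdev_hello_world stage starting here drives the stock hello_bdev example against the first GPT partition: it opens Nvme0n1p1, opens an I/O channel, writes a string, reads it back ("Read string from bdev : Hello World!"), and stops the app. Reproducing the step by hand would look roughly like the following, assuming it is run as root from the repository root with the same JSON config:

    # The --json config attaches the four QEMU NVMe controllers over PCIe;
    # -b names the bdev (here the first GPT partition) the example opens.
    ./build/examples/hello_bdev --json test/bdev/bdev.json -b Nvme0n1p1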
00:08:38.229 [2024-07-11 18:15:24.470886] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78782 ] 00:08:38.229 [2024-07-11 18:15:24.616319] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:38.488 [2024-07-11 18:15:24.660302] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.747 [2024-07-11 18:15:25.039159] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:08:38.747 [2024-07-11 18:15:25.039249] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1p1 00:08:38.747 [2024-07-11 18:15:25.039300] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:08:38.747 [2024-07-11 18:15:25.041885] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:08:38.747 [2024-07-11 18:15:25.042573] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:08:38.747 [2024-07-11 18:15:25.042639] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:08:38.747 [2024-07-11 18:15:25.042918] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:08:38.747 00:08:38.747 [2024-07-11 18:15:25.042970] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:08:39.006 00:08:39.006 real 0m0.854s 00:08:39.006 user 0m0.551s 00:08:39.006 sys 0m0.198s 00:08:39.006 18:15:25 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:39.006 18:15:25 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:08:39.006 ************************************ 00:08:39.006 END TEST bdev_hello_world 00:08:39.006 ************************************ 00:08:39.006 18:15:25 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 0 00:08:39.006 18:15:25 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:08:39.006 18:15:25 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:39.006 18:15:25 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:39.006 18:15:25 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:39.006 ************************************ 00:08:39.006 START TEST bdev_bounds 00:08:39.006 ************************************ 00:08:39.006 18:15:25 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1123 -- # bdev_bounds '' 00:08:39.006 18:15:25 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=78813 00:08:39.006 18:15:25 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:08:39.006 18:15:25 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:39.006 Process bdevio pid: 78813 00:08:39.006 18:15:25 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 78813' 00:08:39.006 18:15:25 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 78813 00:08:39.006 18:15:25 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@829 -- # '[' -z 78813 ']' 00:08:39.006 18:15:25 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:39.006 18:15:25 blockdev_nvme_gpt.bdev_bounds -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:08:39.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:39.006 18:15:25 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:39.006 18:15:25 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:39.006 18:15:25 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:08:39.006 [2024-07-11 18:15:25.376739] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:08:39.006 [2024-07-11 18:15:25.376933] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78813 ] 00:08:39.265 [2024-07-11 18:15:25.519178] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:39.265 [2024-07-11 18:15:25.555922] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:39.265 [2024-07-11 18:15:25.556006] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.265 [2024-07-11 18:15:25.556074] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:40.203 18:15:26 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:40.203 18:15:26 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@862 -- # return 0 00:08:40.203 18:15:26 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:08:40.203 I/O targets: 00:08:40.203 Nvme0n1p1: 774144 blocks of 4096 bytes (3024 MiB) 00:08:40.203 Nvme0n1p2: 774143 blocks of 4096 bytes (3024 MiB) 00:08:40.203 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:08:40.203 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:40.203 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:40.203 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:40.203 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:08:40.203 00:08:40.203 00:08:40.203 CUnit - A unit testing framework for C - Version 2.1-3 00:08:40.203 http://cunit.sourceforge.net/ 00:08:40.203 00:08:40.203 00:08:40.203 Suite: bdevio tests on: Nvme3n1 00:08:40.203 Test: blockdev write read block ...passed 00:08:40.203 Test: blockdev write zeroes read block ...passed 00:08:40.203 Test: blockdev write zeroes read no split ...passed 00:08:40.203 Test: blockdev write zeroes read split ...passed 00:08:40.203 Test: blockdev write zeroes read split partial ...passed 00:08:40.203 Test: blockdev reset ...[2024-07-11 18:15:26.389624] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:08:40.203 passed 00:08:40.203 Test: blockdev write read 8 blocks ...[2024-07-11 18:15:26.392110] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:40.203 passed 00:08:40.203 Test: blockdev write read size > 128k ...passed 00:08:40.203 Test: blockdev write read invalid size ...passed 00:08:40.203 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:40.203 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:40.203 Test: blockdev write read max offset ...passed 00:08:40.203 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:40.203 Test: blockdev writev readv 8 blocks ...passed 00:08:40.203 Test: blockdev writev readv 30 x 1block ...passed 00:08:40.203 Test: blockdev writev readv block ...passed 00:08:40.203 Test: blockdev writev readv size > 128k ...passed 00:08:40.203 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:40.203 Test: blockdev comparev and writev ...[2024-07-11 18:15:26.398678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b1204000 len:0x1000 00:08:40.204 [2024-07-11 18:15:26.398770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:40.204 passed 00:08:40.204 Test: blockdev nvme passthru rw ...passed 00:08:40.204 Test: blockdev nvme passthru vendor specific ...passed 00:08:40.204 Test: blockdev nvme admin passthru ...[2024-07-11 18:15:26.399666] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:40.204 [2024-07-11 18:15:26.399735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:40.204 passed 00:08:40.204 Test: blockdev copy ...passed 00:08:40.204 Suite: bdevio tests on: Nvme2n3 00:08:40.204 Test: blockdev write read block ...passed 00:08:40.204 Test: blockdev write zeroes read block ...passed 00:08:40.204 Test: blockdev write zeroes read no split ...passed 00:08:40.204 Test: blockdev write zeroes read split ...passed 00:08:40.204 Test: blockdev write zeroes read split partial ...passed 00:08:40.204 Test: blockdev reset ...[2024-07-11 18:15:26.410977] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:08:40.204 passed 00:08:40.204 Test: blockdev write read 8 blocks ...[2024-07-11 18:15:26.413733] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:40.204 passed 00:08:40.204 Test: blockdev write read size > 128k ...passed 00:08:40.204 Test: blockdev write read invalid size ...passed 00:08:40.204 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:40.204 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:40.204 Test: blockdev write read max offset ...passed 00:08:40.204 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:40.204 Test: blockdev writev readv 8 blocks ...passed 00:08:40.204 Test: blockdev writev readv 30 x 1block ...passed 00:08:40.204 Test: blockdev writev readv block ...passed 00:08:40.204 Test: blockdev writev readv size > 128k ...passed 00:08:40.204 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:40.204 Test: blockdev comparev and writev ...[2024-07-11 18:15:26.419835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d423d000 len:0x1000 00:08:40.204 [2024-07-11 18:15:26.419893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:40.204 passed 00:08:40.204 Test: blockdev nvme passthru rw ...passed 00:08:40.204 Test: blockdev nvme passthru vendor specific ...passed 00:08:40.204 Test: blockdev nvme admin passthru ...[2024-07-11 18:15:26.420662] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:40.204 [2024-07-11 18:15:26.420711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:40.204 passed 00:08:40.204 Test: blockdev copy ...passed 00:08:40.204 Suite: bdevio tests on: Nvme2n2 00:08:40.204 Test: blockdev write read block ...passed 00:08:40.204 Test: blockdev write zeroes read block ...passed 00:08:40.204 Test: blockdev write zeroes read no split ...passed 00:08:40.204 Test: blockdev write zeroes read split ...passed 00:08:40.204 Test: blockdev write zeroes read split partial ...passed 00:08:40.204 Test: blockdev reset ...[2024-07-11 18:15:26.432733] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:08:40.204 passed 00:08:40.204 Test: blockdev write read 8 blocks ...[2024-07-11 18:15:26.435506] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:40.204 passed 00:08:40.204 Test: blockdev write read size > 128k ...passed 00:08:40.204 Test: blockdev write read invalid size ...passed 00:08:40.204 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:40.204 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:40.204 Test: blockdev write read max offset ...passed 00:08:40.204 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:40.204 Test: blockdev writev readv 8 blocks ...passed 00:08:40.204 Test: blockdev writev readv 30 x 1block ...passed 00:08:40.204 Test: blockdev writev readv block ...passed 00:08:40.204 Test: blockdev writev readv size > 128k ...passed 00:08:40.204 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:40.204 Test: blockdev comparev and writev ...[2024-07-11 18:15:26.441458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d4239000 len:0x1000 00:08:40.204 [2024-07-11 18:15:26.441545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:40.204 passed 00:08:40.204 Test: blockdev nvme passthru rw ...passed 00:08:40.204 Test: blockdev nvme passthru vendor specific ...passed 00:08:40.204 Test: blockdev nvme admin passthru ...[2024-07-11 18:15:26.442359] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:40.204 [2024-07-11 18:15:26.442438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:40.204 passed 00:08:40.204 Test: blockdev copy ...passed 00:08:40.204 Suite: bdevio tests on: Nvme2n1 00:08:40.204 Test: blockdev write read block ...passed 00:08:40.204 Test: blockdev write zeroes read block ...passed 00:08:40.204 Test: blockdev write zeroes read no split ...passed 00:08:40.204 Test: blockdev write zeroes read split ...passed 00:08:40.204 Test: blockdev write zeroes read split partial ...passed 00:08:40.204 Test: blockdev reset ...[2024-07-11 18:15:26.454136] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:08:40.204 passed 00:08:40.204 Test: blockdev write read 8 blocks ...[2024-07-11 18:15:26.456594] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:40.204 passed 00:08:40.204 Test: blockdev write read size > 128k ...passed 00:08:40.204 Test: blockdev write read invalid size ...passed 00:08:40.204 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:40.204 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:40.204 Test: blockdev write read max offset ...passed 00:08:40.204 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:40.204 Test: blockdev writev readv 8 blocks ...passed 00:08:40.204 Test: blockdev writev readv 30 x 1block ...passed 00:08:40.204 Test: blockdev writev readv block ...passed 00:08:40.204 Test: blockdev writev readv size > 128k ...passed 00:08:40.204 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:40.204 Test: blockdev comparev and writev ...[2024-07-11 18:15:26.463009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d4235000 len:0x1000 00:08:40.204 [2024-07-11 18:15:26.463080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:40.204 passed 00:08:40.204 Test: blockdev nvme passthru rw ...passed 00:08:40.204 Test: blockdev nvme passthru vendor specific ...[2024-07-11 18:15:26.463812] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:40.204 passed 00:08:40.204 Test: blockdev nvme admin passthru ...[2024-07-11 18:15:26.463883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:40.204 passed 00:08:40.204 Test: blockdev copy ...passed 00:08:40.204 Suite: bdevio tests on: Nvme1n1 00:08:40.204 Test: blockdev write read block ...passed 00:08:40.204 Test: blockdev write zeroes read block ...passed 00:08:40.204 Test: blockdev write zeroes read no split ...passed 00:08:40.204 Test: blockdev write zeroes read split ...passed 00:08:40.204 Test: blockdev write zeroes read split partial ...passed 00:08:40.204 Test: blockdev reset ...[2024-07-11 18:15:26.475996] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:08:40.204 passed 00:08:40.204 Test: blockdev write read 8 blocks ...[2024-07-11 18:15:26.478084] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:40.204 passed 00:08:40.204 Test: blockdev write read size > 128k ...passed 00:08:40.204 Test: blockdev write read invalid size ...passed 00:08:40.204 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:40.204 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:40.204 Test: blockdev write read max offset ...passed 00:08:40.204 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:40.204 Test: blockdev writev readv 8 blocks ...passed 00:08:40.204 Test: blockdev writev readv 30 x 1block ...passed 00:08:40.204 Test: blockdev writev readv block ...passed 00:08:40.204 Test: blockdev writev readv size > 128k ...passed 00:08:40.204 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:40.204 Test: blockdev comparev and writev ...[2024-07-11 18:15:26.484410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c320e000 len:0x1000 00:08:40.204 [2024-07-11 18:15:26.484480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:40.204 passed 00:08:40.204 Test: blockdev nvme passthru rw ...passed 00:08:40.204 Test: blockdev nvme passthru vendor specific ...passed 00:08:40.204 Test: blockdev nvme admin passthru ...[2024-07-11 18:15:26.485318] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:40.204 [2024-07-11 18:15:26.485382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:40.204 passed 00:08:40.204 Test: blockdev copy ...passed 00:08:40.204 Suite: bdevio tests on: Nvme0n1p2 00:08:40.204 Test: blockdev write read block ...passed 00:08:40.204 Test: blockdev write zeroes read block ...passed 00:08:40.204 Test: blockdev write zeroes read no split ...passed 00:08:40.204 Test: blockdev write zeroes read split ...passed 00:08:40.204 Test: blockdev write zeroes read split partial ...passed 00:08:40.204 Test: blockdev reset ...[2024-07-11 18:15:26.499288] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:08:40.204 passed 00:08:40.204 Test: blockdev write read 8 blocks ...[2024-07-11 18:15:26.501485] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:40.204 passed 00:08:40.204 Test: blockdev write read size > 128k ...passed 00:08:40.204 Test: blockdev write read invalid size ...passed 00:08:40.204 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:40.204 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:40.204 Test: blockdev write read max offset ...passed 00:08:40.204 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:40.204 Test: blockdev writev readv 8 blocks ...passed 00:08:40.204 Test: blockdev writev readv 30 x 1block ...passed 00:08:40.204 Test: blockdev writev readv block ...passed 00:08:40.204 Test: blockdev writev readv size > 128k ...passed 00:08:40.204 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:40.204 Test: blockdev comparev and writev ...passed 00:08:40.204 Test: blockdev nvme passthru rw ...passed 00:08:40.204 Test: blockdev nvme passthru vendor specific ...passed 00:08:40.204 Test: blockdev nvme admin passthru ...passed 00:08:40.204 Test: blockdev copy ...[2024-07-11 18:15:26.506957] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1p2 since it has 00:08:40.204 separate metadata which is not supported yet. 00:08:40.204 passed 00:08:40.204 Suite: bdevio tests on: Nvme0n1p1 00:08:40.204 Test: blockdev write read block ...passed 00:08:40.204 Test: blockdev write zeroes read block ...passed 00:08:40.204 Test: blockdev write zeroes read no split ...passed 00:08:40.204 Test: blockdev write zeroes read split ...passed 00:08:40.205 Test: blockdev write zeroes read split partial ...passed 00:08:40.205 Test: blockdev reset ...[2024-07-11 18:15:26.518980] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:08:40.205 passed 00:08:40.205 Test: blockdev write read 8 blocks ...[2024-07-11 18:15:26.521194] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:08:40.205 passed 00:08:40.205 Test: blockdev write read size > 128k ...passed 00:08:40.205 Test: blockdev write read invalid size ...passed 00:08:40.205 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:40.205 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:40.205 Test: blockdev write read max offset ...passed 00:08:40.205 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:40.205 Test: blockdev writev readv 8 blocks ...passed 00:08:40.205 Test: blockdev writev readv 30 x 1block ...passed 00:08:40.205 Test: blockdev writev readv block ...passed 00:08:40.205 Test: blockdev writev readv size > 128k ...passed 00:08:40.205 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:40.205 Test: blockdev comparev and writev ...passed 00:08:40.205 Test: blockdev nvme passthru rw ...passed 00:08:40.205 Test: blockdev nvme passthru vendor specific ...passed 00:08:40.205 Test: blockdev nvme admin passthru ...passed 00:08:40.205 Test: blockdev copy ...[2024-07-11 18:15:26.526815] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1p1 since it has 00:08:40.205 separate metadata which is not supported yet. 
00:08:40.205 passed 00:08:40.205 00:08:40.205 Run Summary: Type Total Ran Passed Failed Inactive 00:08:40.205 suites 7 7 n/a 0 0 00:08:40.205 tests 161 161 161 0 0 00:08:40.205 asserts 1006 1006 1006 0 n/a 00:08:40.205 00:08:40.205 Elapsed time = 0.345 seconds 00:08:40.205 0 00:08:40.205 18:15:26 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 78813 00:08:40.205 18:15:26 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@948 -- # '[' -z 78813 ']' 00:08:40.205 18:15:26 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@952 -- # kill -0 78813 00:08:40.205 18:15:26 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@953 -- # uname 00:08:40.205 18:15:26 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:40.205 18:15:26 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78813 00:08:40.205 18:15:26 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:40.205 18:15:26 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:40.205 18:15:26 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@966 -- # echo 'killing process with pid 78813' 00:08:40.205 killing process with pid 78813 00:08:40.205 18:15:26 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@967 -- # kill 78813 00:08:40.205 18:15:26 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # wait 78813 00:08:40.465 18:15:26 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:08:40.465 00:08:40.465 real 0m1.468s 00:08:40.465 user 0m3.712s 00:08:40.465 sys 0m0.298s 00:08:40.465 ************************************ 00:08:40.465 END TEST bdev_bounds 00:08:40.465 ************************************ 00:08:40.465 18:15:26 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:40.465 18:15:26 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:08:40.465 18:15:26 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 0 00:08:40.465 18:15:26 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:40.465 18:15:26 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:08:40.465 18:15:26 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:40.465 18:15:26 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:40.465 ************************************ 00:08:40.465 START TEST bdev_nbd 00:08:40.465 ************************************ 00:08:40.465 18:15:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1123 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:40.465 18:15:26 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:08:40.465 18:15:26 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:08:40.466 18:15:26 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:40.466 18:15:26 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:40.466 18:15:26 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # bdev_all=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 
'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:40.466 18:15:26 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_all 00:08:40.466 18:15:26 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@305 -- # local bdev_num=7 00:08:40.466 18:15:26 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:08:40.466 18:15:26 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:08:40.466 18:15:26 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # local nbd_all 00:08:40.466 18:15:26 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@312 -- # bdev_num=7 00:08:40.466 18:15:26 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:40.466 18:15:26 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local nbd_list 00:08:40.466 18:15:26 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@315 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:40.466 18:15:26 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@315 -- # local bdev_list 00:08:40.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:40.466 18:15:26 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # nbd_pid=78861 00:08:40.466 18:15:26 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:08:40.466 18:15:26 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:40.466 18:15:26 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@320 -- # waitforlisten 78861 /var/tmp/spdk-nbd.sock 00:08:40.466 18:15:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@829 -- # '[' -z 78861 ']' 00:08:40.466 18:15:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:40.466 18:15:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:40.466 18:15:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:40.466 18:15:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:40.466 18:15:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:08:40.725 [2024-07-11 18:15:26.903359] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
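The bdev_nbd stage brings up bdev_svc on a dedicated RPC socket (/var/tmp/spdk-nbd.sock, pid 78861) and exports each of the seven bdevs as a kernel block device via nbd_start_disk. After each attach the harness runs waitfornbd, whose two checks the trace below walks through for each device in turn: the name must appear in /proc/partitions within 20 polls, and the device must then complete one direct 4 KiB read. A condensed sketch (the poll sleep is an assumption; the trace only shows the counter loop, the grep, and the dd/stat size check):

    # waitfornbd sketch: device visible in /proc/partitions, then serves I/O.
    waitfornbd() {
        local nbd_name=$1 i size
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # interval assumed; not visible in this trace
        done
        # One 4 KiB O_DIRECT read proves the nbd connection actually works.
        dd if=/dev/"$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [[ $size != 0 ]]
    }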
00:08:40.725 [2024-07-11 18:15:26.903556] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:40.725 [2024-07-11 18:15:27.049895] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.725 [2024-07-11 18:15:27.086022] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.662 18:15:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:41.662 18:15:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@862 -- # return 0 00:08:41.662 18:15:27 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:41.662 18:15:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:41.662 18:15:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:41.662 18:15:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:08:41.662 18:15:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:41.662 18:15:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:41.662 18:15:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:41.662 18:15:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:08:41.662 18:15:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:08:41.662 18:15:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:08:41.662 18:15:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:08:41.663 18:15:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:41.663 18:15:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 00:08:41.663 18:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:08:41.663 18:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:08:41.922 18:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:08:41.922 18:15:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:08:41.922 18:15:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:08:41.922 18:15:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:41.922 18:15:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:41.922 18:15:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:08:41.922 18:15:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:41.922 18:15:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:41.922 18:15:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:41.922 18:15:28 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:41.922 1+0 records in 00:08:41.922 1+0 records out 00:08:41.922 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00105338 s, 3.9 MB/s 00:08:41.922 18:15:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:41.922 18:15:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:41.922 18:15:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:41.922 18:15:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:41.922 18:15:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:41.922 18:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:41.922 18:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:41.922 18:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 00:08:41.922 18:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:08:41.922 18:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:08:41.922 18:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:08:41.922 18:15:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:08:41.922 18:15:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:08:41.922 18:15:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:41.922 18:15:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:41.922 18:15:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:08:41.922 18:15:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:41.922 18:15:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:41.922 18:15:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:41.922 18:15:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:41.922 1+0 records in 00:08:41.922 1+0 records out 00:08:41.922 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000770408 s, 5.3 MB/s 00:08:41.922 18:15:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:42.181 18:15:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:42.181 18:15:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:42.181 18:15:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:42.181 18:15:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:42.181 18:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:42.181 18:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:42.181 18:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1 00:08:42.440 18:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:08:42.440 18:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:08:42.440 18:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:08:42.440 18:15:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:08:42.440 18:15:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:08:42.440 18:15:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:42.440 18:15:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:42.440 18:15:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:08:42.440 18:15:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:42.440 18:15:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:42.440 18:15:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:42.440 18:15:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:42.440 1+0 records in 00:08:42.440 1+0 records out 00:08:42.440 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000655218 s, 6.3 MB/s 00:08:42.440 18:15:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:42.440 18:15:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:42.440 18:15:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:42.440 18:15:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:42.440 18:15:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:42.440 18:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:42.440 18:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:42.440 18:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:08:42.700 18:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:08:42.700 18:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:08:42.700 18:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:08:42.700 18:15:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:08:42.700 18:15:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:08:42.700 18:15:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:42.700 18:15:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:42.700 18:15:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:08:42.700 18:15:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:42.700 18:15:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:42.700 18:15:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:42.700 18:15:28 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:42.700 1+0 records in 00:08:42.700 1+0 records out 00:08:42.700 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000749577 s, 5.5 MB/s 00:08:42.700 18:15:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:42.700 18:15:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:42.700 18:15:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:42.700 18:15:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:42.700 18:15:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:42.700 18:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:42.700 18:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:42.700 18:15:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:08:42.959 18:15:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:08:42.959 18:15:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:08:42.959 18:15:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:08:42.959 18:15:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:08:42.959 18:15:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:08:42.959 18:15:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:42.959 18:15:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:42.959 18:15:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:08:42.959 18:15:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:42.959 18:15:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:42.959 18:15:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:42.959 18:15:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:42.959 1+0 records in 00:08:42.959 1+0 records out 00:08:42.959 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000923263 s, 4.4 MB/s 00:08:42.959 18:15:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:42.959 18:15:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:42.959 18:15:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:42.959 18:15:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:42.959 18:15:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:42.959 18:15:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:42.959 18:15:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:42.959 18:15:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme2n3 00:08:43.218 18:15:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:08:43.219 18:15:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:08:43.219 18:15:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:08:43.219 18:15:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:08:43.219 18:15:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:08:43.219 18:15:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:43.219 18:15:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:43.219 18:15:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:08:43.219 18:15:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:43.219 18:15:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:43.219 18:15:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:43.219 18:15:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:43.219 1+0 records in 00:08:43.219 1+0 records out 00:08:43.219 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000968577 s, 4.2 MB/s 00:08:43.219 18:15:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:43.219 18:15:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:43.219 18:15:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:43.219 18:15:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:43.219 18:15:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:43.219 18:15:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:43.219 18:15:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:43.219 18:15:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:08:43.477 18:15:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:08:43.477 18:15:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:08:43.477 18:15:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:08:43.477 18:15:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd6 00:08:43.477 18:15:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:08:43.477 18:15:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:43.477 18:15:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:43.477 18:15:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd6 /proc/partitions 00:08:43.477 18:15:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:43.477 18:15:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:43.477 18:15:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:43.477 18:15:29 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@883 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:43.477 1+0 records in 00:08:43.477 1+0 records out 00:08:43.477 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000780069 s, 5.3 MB/s 00:08:43.477 18:15:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:43.477 18:15:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:43.477 18:15:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:43.477 18:15:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:43.477 18:15:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:43.477 18:15:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:43.477 18:15:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:43.477 18:15:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:43.735 18:15:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:08:43.735 { 00:08:43.735 "nbd_device": "/dev/nbd0", 00:08:43.735 "bdev_name": "Nvme0n1p1" 00:08:43.735 }, 00:08:43.735 { 00:08:43.735 "nbd_device": "/dev/nbd1", 00:08:43.735 "bdev_name": "Nvme0n1p2" 00:08:43.735 }, 00:08:43.735 { 00:08:43.735 "nbd_device": "/dev/nbd2", 00:08:43.735 "bdev_name": "Nvme1n1" 00:08:43.735 }, 00:08:43.735 { 00:08:43.735 "nbd_device": "/dev/nbd3", 00:08:43.735 "bdev_name": "Nvme2n1" 00:08:43.735 }, 00:08:43.735 { 00:08:43.735 "nbd_device": "/dev/nbd4", 00:08:43.735 "bdev_name": "Nvme2n2" 00:08:43.735 }, 00:08:43.735 { 00:08:43.735 "nbd_device": "/dev/nbd5", 00:08:43.735 "bdev_name": "Nvme2n3" 00:08:43.735 }, 00:08:43.735 { 00:08:43.735 "nbd_device": "/dev/nbd6", 00:08:43.735 "bdev_name": "Nvme3n1" 00:08:43.735 } 00:08:43.735 ]' 00:08:43.735 18:15:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:08:43.735 18:15:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:08:43.735 { 00:08:43.735 "nbd_device": "/dev/nbd0", 00:08:43.735 "bdev_name": "Nvme0n1p1" 00:08:43.735 }, 00:08:43.735 { 00:08:43.735 "nbd_device": "/dev/nbd1", 00:08:43.735 "bdev_name": "Nvme0n1p2" 00:08:43.735 }, 00:08:43.735 { 00:08:43.735 "nbd_device": "/dev/nbd2", 00:08:43.735 "bdev_name": "Nvme1n1" 00:08:43.735 }, 00:08:43.735 { 00:08:43.735 "nbd_device": "/dev/nbd3", 00:08:43.735 "bdev_name": "Nvme2n1" 00:08:43.735 }, 00:08:43.735 { 00:08:43.735 "nbd_device": "/dev/nbd4", 00:08:43.735 "bdev_name": "Nvme2n2" 00:08:43.735 }, 00:08:43.735 { 00:08:43.735 "nbd_device": "/dev/nbd5", 00:08:43.735 "bdev_name": "Nvme2n3" 00:08:43.735 }, 00:08:43.735 { 00:08:43.735 "nbd_device": "/dev/nbd6", 00:08:43.735 "bdev_name": "Nvme3n1" 00:08:43.735 } 00:08:43.735 ]' 00:08:43.735 18:15:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:08:43.735 18:15:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:08:43.735 18:15:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:43.735 18:15:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 
-- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:08:43.735 18:15:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:43.735 18:15:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:43.735 18:15:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:43.735 18:15:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:43.994 18:15:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:43.994 18:15:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:43.994 18:15:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:43.994 18:15:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:43.994 18:15:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:43.995 18:15:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:43.995 18:15:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:43.995 18:15:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:43.995 18:15:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:43.995 18:15:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:44.254 18:15:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:44.254 18:15:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:44.254 18:15:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:44.254 18:15:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:44.254 18:15:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:44.254 18:15:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:44.254 18:15:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:44.254 18:15:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:44.254 18:15:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:44.254 18:15:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:08:44.513 18:15:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:08:44.513 18:15:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:08:44.513 18:15:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:08:44.513 18:15:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:44.513 18:15:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:44.513 18:15:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:08:44.513 18:15:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:44.513 18:15:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:44.513 18:15:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:44.513 18:15:30 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:08:44.772 18:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:08:44.772 18:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:08:44.772 18:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:08:44.772 18:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:44.772 18:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:44.772 18:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:08:44.772 18:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:44.772 18:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:44.772 18:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:44.772 18:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:08:45.031 18:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:08:45.031 18:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:08:45.031 18:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:08:45.031 18:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:45.031 18:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:45.031 18:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:08:45.031 18:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:45.031 18:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:45.031 18:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:45.032 18:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:08:45.291 18:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:08:45.291 18:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:08:45.291 18:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:08:45.291 18:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:45.291 18:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:45.291 18:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:08:45.291 18:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:45.291 18:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:45.291 18:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:45.291 18:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:08:45.550 18:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:08:45.550 18:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:08:45.550 18:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd6 00:08:45.550 18:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:45.550 18:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:45.550 18:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:08:45.550 18:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:45.550 18:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:45.550 18:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:45.550 18:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:45.550 18:15:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:45.809 18:15:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:45.809 18:15:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:45.809 18:15:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:45.809 18:15:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:45.809 18:15:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:45.809 18:15:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:08:45.809 18:15:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:08:45.809 18:15:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:08:45.809 18:15:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:08:45.809 18:15:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:08:45.810 18:15:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:08:45.810 18:15:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:08:45.810 18:15:32 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:45.810 18:15:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:45.810 18:15:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:45.810 18:15:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:45.810 18:15:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:45.810 18:15:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:45.810 18:15:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:45.810 18:15:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:45.810 18:15:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:45.810 18:15:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:45.810 
18:15:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:45.810 18:15:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:45.810 18:15:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:08:45.810 18:15:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:45.810 18:15:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:45.810 18:15:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 /dev/nbd0 00:08:46.069 /dev/nbd0 00:08:46.069 18:15:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:46.069 18:15:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:46.069 18:15:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:08:46.069 18:15:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:08:46.069 18:15:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:46.069 18:15:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:46.069 18:15:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:08:46.069 18:15:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:46.069 18:15:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:46.069 18:15:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:46.069 18:15:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:46.069 1+0 records in 00:08:46.069 1+0 records out 00:08:46.069 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000815551 s, 5.0 MB/s 00:08:46.069 18:15:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:46.329 18:15:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:46.329 18:15:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:46.329 18:15:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:46.329 18:15:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:46.329 18:15:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:46.329 18:15:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:46.329 18:15:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 /dev/nbd1 00:08:46.329 /dev/nbd1 00:08:46.329 18:15:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:46.329 18:15:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:46.329 18:15:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:08:46.329 18:15:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:08:46.329 18:15:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:46.329 18:15:32 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:46.329 18:15:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:08:46.329 18:15:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:46.329 18:15:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:46.329 18:15:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:46.329 18:15:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:46.329 1+0 records in 00:08:46.329 1+0 records out 00:08:46.329 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000611209 s, 6.7 MB/s 00:08:46.329 18:15:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:46.329 18:15:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:46.329 18:15:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:46.329 18:15:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:46.329 18:15:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:46.329 18:15:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:46.329 18:15:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:46.329 18:15:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd10 00:08:46.589 /dev/nbd10 00:08:46.589 18:15:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:08:46.589 18:15:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:08:46.589 18:15:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:08:46.589 18:15:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:08:46.589 18:15:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:46.589 18:15:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:46.589 18:15:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:08:46.589 18:15:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:46.589 18:15:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:46.589 18:15:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:46.589 18:15:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:46.589 1+0 records in 00:08:46.589 1+0 records out 00:08:46.589 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00060109 s, 6.8 MB/s 00:08:46.589 18:15:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:46.589 18:15:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:46.589 18:15:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:46.589 18:15:32 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:46.589 18:15:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:46.589 18:15:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:46.589 18:15:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:46.589 18:15:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:08:46.846 /dev/nbd11 00:08:46.846 18:15:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:08:46.846 18:15:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:08:46.846 18:15:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:08:46.846 18:15:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:08:46.846 18:15:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:46.846 18:15:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:46.846 18:15:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:08:46.846 18:15:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:46.846 18:15:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:46.846 18:15:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:46.846 18:15:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:46.846 1+0 records in 00:08:46.846 1+0 records out 00:08:46.846 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000462029 s, 8.9 MB/s 00:08:46.846 18:15:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:46.846 18:15:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:46.846 18:15:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:46.846 18:15:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:46.846 18:15:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:46.846 18:15:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:46.846 18:15:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:46.846 18:15:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:08:47.105 /dev/nbd12 00:08:47.105 18:15:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:08:47.105 18:15:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:08:47.105 18:15:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:08:47.105 18:15:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:08:47.105 18:15:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:47.105 18:15:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:47.105 18:15:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 
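Each per-device check above follows the same waitfornbd pattern: poll /proc/partitions until the nbd name appears, then read one 4 KiB block through the device with O_DIRECT and confirm a non-empty file landed on disk. A condensed bash sketch follows; the 20-try bound, dd invocation, and stat size check mirror the log, while the sleep interval and scratch-file path are assumptions.

waitfornbd() {
  local nbd_name=$1 i size tmp=/tmp/nbdtest
  # Wait (up to 20 tries) for the kernel to publish the device.
  for ((i = 1; i <= 20; i++)); do
    grep -q -w "$nbd_name" /proc/partitions && break
    sleep 0.1  # interval is an assumption
  done
  # Prove the device services reads: pull one 4096-byte block, bypassing the page cache.
  dd if="/dev/$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct || return 1
  size=$(stat -c %s "$tmp")
  rm -f "$tmp"
  [ "$size" != 0 ]
}

# Usage matching the records above, e.g.: waitfornbd nbd12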
00:08:47.105 18:15:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:47.105 18:15:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:47.105 18:15:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:47.105 18:15:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:47.105 1+0 records in 00:08:47.105 1+0 records out 00:08:47.105 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00071476 s, 5.7 MB/s 00:08:47.105 18:15:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:47.105 18:15:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:47.105 18:15:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:47.105 18:15:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:47.105 18:15:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:47.105 18:15:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:47.105 18:15:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:47.105 18:15:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:08:47.363 /dev/nbd13 00:08:47.622 18:15:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:08:47.622 18:15:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:08:47.622 18:15:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:08:47.622 18:15:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:08:47.622 18:15:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:47.622 18:15:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:47.622 18:15:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 00:08:47.622 18:15:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:47.622 18:15:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:47.622 18:15:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:47.622 18:15:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:47.622 1+0 records in 00:08:47.622 1+0 records out 00:08:47.622 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000763186 s, 5.4 MB/s 00:08:47.622 18:15:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:47.622 18:15:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:47.622 18:15:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:47.622 18:15:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:47.622 18:15:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:47.622 18:15:33 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:47.622 18:15:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:47.622 18:15:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:08:47.882 /dev/nbd14 00:08:47.882 18:15:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:08:47.882 18:15:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:08:47.882 18:15:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd14 00:08:47.882 18:15:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:08:47.882 18:15:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:47.882 18:15:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:47.882 18:15:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd14 /proc/partitions 00:08:47.882 18:15:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:47.882 18:15:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:47.882 18:15:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:47.882 18:15:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:47.882 1+0 records in 00:08:47.882 1+0 records out 00:08:47.882 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000924322 s, 4.4 MB/s 00:08:47.882 18:15:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:47.882 18:15:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:47.882 18:15:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:47.882 18:15:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:47.882 18:15:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:47.882 18:15:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:47.882 18:15:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:47.882 18:15:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:47.882 18:15:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:47.882 18:15:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:48.141 18:15:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:48.142 { 00:08:48.142 "nbd_device": "/dev/nbd0", 00:08:48.142 "bdev_name": "Nvme0n1p1" 00:08:48.142 }, 00:08:48.142 { 00:08:48.142 "nbd_device": "/dev/nbd1", 00:08:48.142 "bdev_name": "Nvme0n1p2" 00:08:48.142 }, 00:08:48.142 { 00:08:48.142 "nbd_device": "/dev/nbd10", 00:08:48.142 "bdev_name": "Nvme1n1" 00:08:48.142 }, 00:08:48.142 { 00:08:48.142 "nbd_device": "/dev/nbd11", 00:08:48.142 "bdev_name": "Nvme2n1" 00:08:48.142 }, 00:08:48.142 { 00:08:48.142 "nbd_device": "/dev/nbd12", 00:08:48.142 "bdev_name": "Nvme2n2" 00:08:48.142 }, 00:08:48.142 { 00:08:48.142 "nbd_device": "/dev/nbd13", 00:08:48.142 "bdev_name": "Nvme2n3" 
00:08:48.142 }, 00:08:48.142 { 00:08:48.142 "nbd_device": "/dev/nbd14", 00:08:48.142 "bdev_name": "Nvme3n1" 00:08:48.142 } 00:08:48.142 ]' 00:08:48.142 18:15:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:48.142 { 00:08:48.142 "nbd_device": "/dev/nbd0", 00:08:48.142 "bdev_name": "Nvme0n1p1" 00:08:48.142 }, 00:08:48.142 { 00:08:48.142 "nbd_device": "/dev/nbd1", 00:08:48.142 "bdev_name": "Nvme0n1p2" 00:08:48.142 }, 00:08:48.142 { 00:08:48.142 "nbd_device": "/dev/nbd10", 00:08:48.142 "bdev_name": "Nvme1n1" 00:08:48.142 }, 00:08:48.142 { 00:08:48.142 "nbd_device": "/dev/nbd11", 00:08:48.142 "bdev_name": "Nvme2n1" 00:08:48.142 }, 00:08:48.142 { 00:08:48.142 "nbd_device": "/dev/nbd12", 00:08:48.142 "bdev_name": "Nvme2n2" 00:08:48.142 }, 00:08:48.142 { 00:08:48.142 "nbd_device": "/dev/nbd13", 00:08:48.142 "bdev_name": "Nvme2n3" 00:08:48.142 }, 00:08:48.142 { 00:08:48.142 "nbd_device": "/dev/nbd14", 00:08:48.142 "bdev_name": "Nvme3n1" 00:08:48.142 } 00:08:48.142 ]' 00:08:48.142 18:15:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:48.142 18:15:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:48.142 /dev/nbd1 00:08:48.142 /dev/nbd10 00:08:48.142 /dev/nbd11 00:08:48.142 /dev/nbd12 00:08:48.142 /dev/nbd13 00:08:48.142 /dev/nbd14' 00:08:48.142 18:15:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:48.142 /dev/nbd1 00:08:48.142 /dev/nbd10 00:08:48.142 /dev/nbd11 00:08:48.142 /dev/nbd12 00:08:48.142 /dev/nbd13 00:08:48.142 /dev/nbd14' 00:08:48.142 18:15:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:48.142 18:15:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:08:48.142 18:15:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:08:48.142 18:15:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:08:48.142 18:15:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:08:48.142 18:15:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:08:48.142 18:15:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:48.142 18:15:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:48.142 18:15:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:48.142 18:15:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:48.142 18:15:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:48.142 18:15:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:08:48.142 256+0 records in 00:08:48.142 256+0 records out 00:08:48.142 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00667674 s, 157 MB/s 00:08:48.142 18:15:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:48.142 18:15:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:48.401 256+0 records in 00:08:48.401 256+0 records out 00:08:48.401 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.199174 s, 5.3 MB/s 00:08:48.401 18:15:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:48.401 18:15:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:48.401 256+0 records in 00:08:48.401 256+0 records out 00:08:48.401 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.135196 s, 7.8 MB/s 00:08:48.401 18:15:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:48.401 18:15:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:08:48.660 256+0 records in 00:08:48.660 256+0 records out 00:08:48.660 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.185809 s, 5.6 MB/s 00:08:48.660 18:15:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:48.660 18:15:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:08:48.919 256+0 records in 00:08:48.919 256+0 records out 00:08:48.919 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.198051 s, 5.3 MB/s 00:08:48.919 18:15:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:48.919 18:15:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:08:48.919 256+0 records in 00:08:48.919 256+0 records out 00:08:48.919 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.180205 s, 5.8 MB/s 00:08:48.920 18:15:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:48.920 18:15:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:08:49.180 256+0 records in 00:08:49.180 256+0 records out 00:08:49.180 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.187574 s, 5.6 MB/s 00:08:49.180 18:15:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:49.180 18:15:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:08:49.439 256+0 records in 00:08:49.439 256+0 records out 00:08:49.439 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.1872 s, 5.6 MB/s 00:08:49.439 18:15:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:08:49.439 18:15:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:49.439 18:15:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:49.439 18:15:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:49.439 18:15:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:49.439 18:15:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:49.439 18:15:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:49.439 18:15:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:08:49.439 18:15:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:08:49.439 18:15:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:49.439 18:15:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:08:49.439 18:15:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:49.439 18:15:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:08:49.439 18:15:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:49.439 18:15:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:08:49.439 18:15:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:49.439 18:15:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:08:49.439 18:15:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:49.439 18:15:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:08:49.439 18:15:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:49.439 18:15:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:08:49.439 18:15:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:49.439 18:15:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:49.439 18:15:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:49.439 18:15:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:49.439 18:15:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:49.439 18:15:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:49.439 18:15:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:49.439 18:15:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:49.697 18:15:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:49.697 18:15:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:49.697 18:15:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:49.697 18:15:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:49.697 18:15:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:49.697 18:15:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:49.697 18:15:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:49.697 18:15:36 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:08:49.697 18:15:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:49.697 18:15:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:49.956 18:15:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:49.956 18:15:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:49.956 18:15:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:49.956 18:15:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:49.956 18:15:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:49.956 18:15:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:49.956 18:15:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:49.956 18:15:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:49.956 18:15:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:49.956 18:15:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:08:50.523 18:15:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:08:50.523 18:15:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:08:50.523 18:15:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:08:50.523 18:15:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:50.523 18:15:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:50.523 18:15:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:08:50.523 18:15:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:50.523 18:15:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:50.523 18:15:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:50.523 18:15:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:08:50.523 18:15:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:08:50.523 18:15:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:08:50.523 18:15:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:08:50.523 18:15:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:50.523 18:15:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:50.523 18:15:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:08:50.523 18:15:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:50.523 18:15:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:50.523 18:15:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:50.523 18:15:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:08:50.781 18:15:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:08:50.781 18:15:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:08:50.781 18:15:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:08:50.781 18:15:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:50.781 18:15:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:50.781 18:15:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:08:50.781 18:15:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:50.781 18:15:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:50.781 18:15:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:50.781 18:15:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:08:51.040 18:15:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:08:51.040 18:15:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:08:51.040 18:15:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:08:51.040 18:15:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:51.040 18:15:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:51.040 18:15:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:08:51.040 18:15:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:51.040 18:15:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:51.040 18:15:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:51.040 18:15:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:08:51.298 18:15:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:08:51.298 18:15:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:08:51.298 18:15:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:08:51.298 18:15:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:51.298 18:15:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:51.298 18:15:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:08:51.298 18:15:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:51.298 18:15:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:51.298 18:15:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:51.298 18:15:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:51.298 18:15:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:51.556 18:15:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:51.556 18:15:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:51.556 18:15:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:51.815 18:15:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:08:51.815 18:15:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:08:51.815 18:15:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:51.815 18:15:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:08:51.815 18:15:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:08:51.815 18:15:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:08:51.815 18:15:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:08:51.815 18:15:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:51.815 18:15:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:08:51.815 18:15:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:51.815 18:15:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:51.815 18:15:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:51.815 18:15:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:08:51.815 18:15:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:08:51.815 18:15:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:08:51.815 malloc_lvol_verify 00:08:52.073 18:15:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:08:52.073 981fd261-af19-4a04-9744-7cd9b500a9af 00:08:52.073 18:15:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:08:52.339 476b51a4-1d4e-4816-a87f-ddd2de7d503f 00:08:52.339 18:15:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:08:52.616 /dev/nbd0 00:08:52.616 18:15:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:08:52.616 mke2fs 1.46.5 (30-Dec-2021) 00:08:52.616 Discarding device blocks: 0/4096 done 00:08:52.616 Creating filesystem with 4096 1k blocks and 1024 inodes 00:08:52.616 00:08:52.616 Allocating group tables: 0/1 done 00:08:52.616 Writing inode tables: 0/1 done 00:08:52.616 Creating journal (1024 blocks): done 00:08:52.616 Writing superblocks and filesystem accounting information: 0/1 done 00:08:52.616 00:08:52.616 18:15:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:08:52.616 18:15:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:08:52.616 18:15:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:52.616 18:15:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:08:52.616 18:15:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:52.616 18:15:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:52.616 18:15:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:08:52.616 18:15:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:52.889 18:15:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:52.889 18:15:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:52.889 18:15:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:52.889 18:15:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:52.889 18:15:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:52.889 18:15:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:52.889 18:15:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:52.889 18:15:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:52.889 18:15:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:08:52.889 18:15:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:08:52.889 18:15:39 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # killprocess 78861 00:08:52.889 18:15:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@948 -- # '[' -z 78861 ']' 00:08:52.889 18:15:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@952 -- # kill -0 78861 00:08:52.889 18:15:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@953 -- # uname 00:08:52.889 18:15:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:52.889 18:15:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78861 00:08:52.889 killing process with pid 78861 00:08:52.889 18:15:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:52.889 18:15:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:52.889 18:15:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 78861' 00:08:52.889 18:15:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@967 -- # kill 78861 00:08:52.889 18:15:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # wait 78861 00:08:53.147 ************************************ 00:08:53.147 END TEST bdev_nbd 00:08:53.147 ************************************ 00:08:53.147 18:15:39 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:08:53.147 00:08:53.147 real 0m12.644s 00:08:53.147 user 0m18.147s 00:08:53.147 sys 0m4.423s 00:08:53.147 18:15:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:53.147 18:15:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:08:53.147 18:15:39 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 0 00:08:53.147 18:15:39 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:08:53.147 18:15:39 blockdev_nvme_gpt -- bdev/blockdev.sh@764 -- # '[' gpt = nvme ']' 00:08:53.147 18:15:39 blockdev_nvme_gpt -- bdev/blockdev.sh@764 -- # '[' gpt = gpt ']' 00:08:53.147 skipping fio tests on NVMe due to multi-ns failures. 00:08:53.147 18:15:39 blockdev_nvme_gpt -- bdev/blockdev.sh@766 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:08:53.147 18:15:39 blockdev_nvme_gpt -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:53.147 18:15:39 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:53.147 18:15:39 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:08:53.147 18:15:39 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:53.147 18:15:39 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:53.147 ************************************ 00:08:53.147 START TEST bdev_verify 00:08:53.147 ************************************ 00:08:53.147 18:15:39 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:53.404 [2024-07-11 18:15:39.588583] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:08:53.404 [2024-07-11 18:15:39.588798] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79294 ] 00:08:53.404 [2024-07-11 18:15:39.737304] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:53.404 [2024-07-11 18:15:39.773011] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:53.404 [2024-07-11 18:15:39.773072] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:53.994 Running I/O for 5 seconds... 
00:08:59.262
00:08:59.262 Latency(us)
00:08:59.262 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:59.262 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:59.262 Verification LBA range: start 0x0 length 0x5e800
00:08:59.262 Nvme0n1p1 : 5.07 1263.20 4.93 0.00 0.00 101096.94 18945.86 107240.73
00:08:59.262 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:59.262 Verification LBA range: start 0x5e800 length 0x5e800
00:08:59.262 Nvme0n1p1 : 5.07 1237.16 4.83 0.00 0.00 103206.63 18230.92 93895.21
00:08:59.262 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:59.262 Verification LBA range: start 0x0 length 0x5e7ff
00:08:59.262 Nvme0n1p2 : 5.07 1262.81 4.93 0.00 0.00 100960.44 18826.71 103427.72
00:08:59.262 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:59.262 Verification LBA range: start 0x5e7ff length 0x5e7ff
00:08:59.262 Nvme0n1p2 : 5.07 1236.40 4.83 0.00 0.00 103013.32 19660.80 88652.33
00:08:59.262 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:59.262 Verification LBA range: start 0x0 length 0xa0000
00:08:59.262 Nvme1n1 : 5.07 1262.38 4.93 0.00 0.00 100828.92 18469.24 104857.60
00:08:59.262 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:59.262 Verification LBA range: start 0xa0000 length 0xa0000
00:08:59.262 Nvme1n1 : 5.08 1235.70 4.83 0.00 0.00 102816.85 20852.36 87222.46
00:08:59.262 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:59.262 Verification LBA range: start 0x0 length 0x80000
00:08:59.262 Nvme2n1 : 5.07 1261.64 4.93 0.00 0.00 100670.08 18469.24 105334.23
00:08:59.262 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:59.262 Verification LBA range: start 0x80000 length 0x80000
00:08:59.262 Nvme2n1 : 5.08 1234.97 4.82 0.00 0.00 102619.67 21805.61 88652.33
00:08:59.262 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:59.262 Verification LBA range: start 0x0 length 0x80000
00:08:59.262 Nvme2n2 : 5.08 1260.92 4.93 0.00 0.00 100507.93 19660.80 107717.35
00:08:59.262 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:59.262 Verification LBA range: start 0x80000 length 0x80000
00:08:59.262 Nvme2n2 : 5.08 1234.26 4.82 0.00 0.00 102426.74 20971.52 90558.84
00:08:59.262 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:59.262 Verification LBA range: start 0x0 length 0x80000
00:08:59.262 Nvme2n3 : 5.08 1260.18 4.92 0.00 0.00 100333.88 18826.71 108670.60
00:08:59.262 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:59.262 Verification LBA range: start 0x80000 length 0x80000
00:08:59.262 Nvme2n3 : 5.08 1233.60 4.82 0.00 0.00 102221.60 19541.64 92941.96
00:08:59.262 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:59.262 Verification LBA range: start 0x0 length 0x20000
00:08:59.262 Nvme3n1 : 5.08 1259.46 4.92 0.00 0.00 100186.16 12630.57 109147.23
00:08:59.262 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:59.262 Verification LBA range: start 0x20000 length 0x20000
00:08:59.262 Nvme3n1 : 5.10 1255.56 4.90 0.00 0.00 100450.69 3872.58 94848.47
00:08:59.262 ===================================================================================================================
00:08:59.262 Total : 17498.24 68.35 0.00 0.00 101513.96 3872.58 109147.23
00:08:59.521
00:08:59.521 real 0m6.223s
00:08:59.521 user 0m11.628s
00:08:59.521 sys 0m0.230s
00:08:59.521 18:15:45 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1124 -- # xtrace_disable
00:08:59.521 ************************************
00:08:59.521 END TEST bdev_verify
00:08:59.521 ************************************
00:08:59.521 18:15:45 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:08:59.521 18:15:45 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 0
00:08:59.521 18:15:45 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:08:59.521 18:15:45 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']'
00:08:59.521 18:15:45 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable
00:08:59.521 18:15:45 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:08:59.521 ************************************
00:08:59.521 START TEST bdev_verify_big_io
00:08:59.521 ************************************
00:08:59.521 18:15:45 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:08:59.780 [2024-07-11 18:15:45.870295] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization...
00:08:59.780 [2024-07-11 18:15:45.870506] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79376 ]
00:08:59.780 [2024-07-11 18:15:46.021569] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2
00:08:59.780 [2024-07-11 18:15:46.065763] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:08:59.780 [2024-07-11 18:15:46.065828] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:09:00.347 Running I/O for 5 seconds...
00:09:06.910
00:09:06.911 Latency(us)
00:09:06.911 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:06.911 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:09:06.911 Verification LBA range: start 0x0 length 0x5e80
00:09:06.911 Nvme0n1p1 : 5.92 98.46 6.15 0.00 0.00 1228493.52 27286.81 1731103.65
00:09:06.911 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:09:06.911 Verification LBA range: start 0x5e80 length 0x5e80
00:09:06.911 Nvme0n1p1 : 5.99 99.23 6.20 0.00 0.00 1190474.78 32172.22 1807363.72
00:09:06.911 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:09:06.911 Verification LBA range: start 0x0 length 0x5e7f
00:09:06.911 Nvme0n1p2 : 5.78 101.15 6.32 0.00 0.00 1175068.32 88175.71 1502323.43
00:09:06.911 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:09:06.911 Verification LBA range: start 0x5e7f length 0x5e7f
00:09:06.911 Nvme0n1p2 : 5.99 99.20 6.20 0.00 0.00 1153616.55 53620.36 1837867.75
00:09:06.911 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:09:06.911 Verification LBA range: start 0x0 length 0xa000
00:09:06.911 Nvme1n1 : 6.09 73.60 4.60 0.00 0.00 1575889.72 142034.39 2104778.01
00:09:06.911 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:09:06.911 Verification LBA range: start 0xa000 length 0xa000
00:09:06.911 Nvme1n1 : 5.99 103.33 6.46 0.00 0.00 1084498.85 72923.69 1860745.77
00:09:06.911 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:09:06.911 Verification LBA range: start 0x0 length 0x8000
00:09:06.911 Nvme2n1 : 5.99 114.86 7.18 0.00 0.00 986580.05 133455.13 1052389.00
00:09:06.911 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:09:06.911 Verification LBA range: start 0x8000 length 0x8000
00:09:06.911 Nvme2n1 : 6.06 107.86 6.74 0.00 0.00 1011681.62 68634.07 1906501.82
00:09:06.911 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:09:06.911 Verification LBA range: start 0x0 length 0x8000
00:09:06.911 Nvme2n2 : 6.08 122.29 7.64 0.00 0.00 901426.87 37415.10 1075267.03
00:09:06.911 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:09:06.911 Verification LBA range: start 0x8000 length 0x8000
00:09:06.911 Nvme2n2 : 6.12 117.85 7.37 0.00 0.00 900881.94 16086.11 1944631.85
00:09:06.911 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:09:06.911 Verification LBA range: start 0x0 length 0x8000
00:09:06.911 Nvme2n3 : 6.08 126.36 7.90 0.00 0.00 849545.31 45279.42 1105771.05
00:09:06.911 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:09:06.911 Verification LBA range: start 0x8000 length 0x8000
00:09:06.911 Nvme2n3 : 6.14 138.67 8.67 0.00 0.00 743955.06 2606.55 1982761.89
00:09:06.911 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:09:06.911 Verification LBA range: start 0x0 length 0x2000
00:09:06.911 Nvme3n1 : 6.09 136.62 8.54 0.00 0.00 763380.98 3783.21 1136275.08
00:09:06.911 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:09:06.911 Verification LBA range: start 0x2000 length 0x2000
00:09:06.911 Nvme3n1 : 5.89 100.04 6.25 0.00 0.00 1235266.00 16801.05 1769233.69
00:09:06.911 ===================================================================================================================
00:09:06.911 Total : 1539.52 96.22 0.00 0.00 1022872.53 2606.55 2104778.01
00:09:06.911
00:09:06.911 real 0m7.480s
00:09:06.911 user 0m14.105s
00:09:06.911 sys 0m0.237s
00:09:06.911 18:15:53 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1124 -- # xtrace_disable
00:09:06.911 18:15:53 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:09:06.911 ************************************
00:09:06.911 END TEST bdev_verify_big_io
00:09:06.911 ************************************
00:09:06.911 18:15:53 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 0
00:09:06.911 18:15:53 blockdev_nvme_gpt -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:09:06.911 18:15:53 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']'
00:09:06.911 18:15:53 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable
00:09:06.911 18:15:53 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:09:06.911 ************************************
00:09:06.911 START TEST bdev_write_zeroes
00:09:06.911 ************************************
00:09:06.911 18:15:53 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:09:07.169 [2024-07-11 18:15:53.406232] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization...
00:09:07.169 [2024-07-11 18:15:53.406472] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79478 ]
00:09:07.169 [2024-07-11 18:15:53.556560] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:07.427 [2024-07-11 18:15:53.592050] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:09:07.684 Running I/O for 1 seconds...
00:09:09.059
00:09:09.059 Latency(us)
00:09:09.059 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:09.059 Job: Nvme0n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:09:09.059 Nvme0n1p1 : 1.02 7074.86 27.64 0.00 0.00 18032.30 13762.56 25380.31
00:09:09.059 Job: Nvme0n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:09:09.059 Nvme0n1p2 : 1.02 7063.34 27.59 0.00 0.00 18024.33 13285.93 25499.46
00:09:09.059 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:09:09.059 Nvme1n1 : 1.03 7052.60 27.55 0.00 0.00 17998.91 13702.98 23473.80
00:09:09.059 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:09:09.059 Nvme2n1 : 1.03 7042.16 27.51 0.00 0.00 17918.38 9294.20 23950.43
00:09:09.059 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:09:09.059 Nvme2n2 : 1.03 7031.50 27.47 0.00 0.00 17918.51 9294.20 24188.74
00:09:09.059 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:09:09.060 Nvme2n3 : 1.03 7021.07 27.43 0.00 0.00 17912.24 9055.88 24546.21
00:09:09.060 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:09:09.060 Nvme3n1 : 1.03 7010.41 27.38 0.00 0.00 17906.84 8817.57 24427.05
00:09:09.060 ===================================================================================================================
00:09:09.060 Total : 49295.94 192.56 0.00 0.00 17958.79 8817.57 25499.46
00:09:09.060
00:09:09.060 real 0m1.998s
00:09:09.060 user 0m1.675s
00:09:09.060 sys 0m0.206s
00:09:09.060 18:15:55 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1124 -- # xtrace_disable
00:09:09.060 18:15:55 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:09:09.060 ************************************
00:09:09.060 END TEST bdev_write_zeroes
00:09:09.060 ************************************
00:09:09.060 18:15:55 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 0
00:09:09.060 18:15:55 blockdev_nvme_gpt -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:09:09.060 18:15:55 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']'
00:09:09.060 18:15:55 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable
00:09:09.060 18:15:55 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:09:09.060 ************************************
00:09:09.060 START TEST bdev_json_nonenclosed
00:09:09.060 ************************************
00:09:09.060 18:15:55 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:09:09.060 [2024-07-11 18:15:55.456013] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization...
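A quick cross-check on the write_zeroes table above: bdevperf ran with -o 4096, so the MiB/s column should equal IOPS times 4 KiB, and per the Latency(us) header the Average/min/max columns are microseconds. For the Nvme0n1p1 row (numbers taken from the table; bc is used here only as a calculator):

    echo '7074.86 * 4096 / 1048576' | bc -l    # = 27.637..., matching the 27.64 MiB/s column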
00:09:09.060 [2024-07-11 18:15:55.456231] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79516 ] 00:09:09.319 [2024-07-11 18:15:55.608193] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:09.319 [2024-07-11 18:15:55.652170] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:09.319 [2024-07-11 18:15:55.652334] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:09:09.319 [2024-07-11 18:15:55.652379] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:09:09.319 [2024-07-11 18:15:55.652397] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:09.577 00:09:09.577 real 0m0.407s 00:09:09.577 user 0m0.192s 00:09:09.577 sys 0m0.111s 00:09:09.577 18:15:55 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # es=234 00:09:09.577 18:15:55 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:09.577 18:15:55 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:09:09.577 ************************************ 00:09:09.577 END TEST bdev_json_nonenclosed 00:09:09.577 ************************************ 00:09:09.577 18:15:55 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 234 00:09:09.577 18:15:55 blockdev_nvme_gpt -- bdev/blockdev.sh@782 -- # true 00:09:09.577 18:15:55 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:09.577 18:15:55 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:09:09.577 18:15:55 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:09.577 18:15:55 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:09.577 ************************************ 00:09:09.577 START TEST bdev_json_nonarray 00:09:09.577 ************************************ 00:09:09.577 18:15:55 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:09.577 [2024-07-11 18:15:55.917216] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:09:09.577 [2024-07-11 18:15:55.917451] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79547 ] 00:09:09.849 [2024-07-11 18:15:56.064902] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:09.849 [2024-07-11 18:15:56.108116] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:09.849 [2024-07-11 18:15:56.108260] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:09:09.849 [2024-07-11 18:15:56.108310] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:09:09.849 [2024-07-11 18:15:56.108329] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:09.849 00:09:09.849 real 0m0.394s 00:09:09.849 user 0m0.175s 00:09:09.849 sys 0m0.116s 00:09:09.849 18:15:56 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # es=234 00:09:09.849 18:15:56 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:09.849 ************************************ 00:09:09.849 END TEST bdev_json_nonarray 00:09:09.849 ************************************ 00:09:09.849 18:15:56 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:09:10.124 18:15:56 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 234 00:09:10.124 18:15:56 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # true 00:09:10.124 18:15:56 blockdev_nvme_gpt -- bdev/blockdev.sh@787 -- # [[ gpt == bdev ]] 00:09:10.124 18:15:56 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # [[ gpt == gpt ]] 00:09:10.124 18:15:56 blockdev_nvme_gpt -- bdev/blockdev.sh@795 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:09:10.124 18:15:56 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:10.124 18:15:56 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:10.124 18:15:56 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:10.124 ************************************ 00:09:10.124 START TEST bdev_gpt_uuid 00:09:10.124 ************************************ 00:09:10.124 18:15:56 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1123 -- # bdev_gpt_uuid 00:09:10.124 18:15:56 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@614 -- # local bdev 00:09:10.124 18:15:56 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@616 -- # start_spdk_tgt 00:09:10.124 18:15:56 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=79567 00:09:10.124 18:15:56 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:10.124 18:15:56 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 79567 00:09:10.124 18:15:56 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:09:10.124 18:15:56 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@829 -- # '[' -z 79567 ']' 00:09:10.124 18:15:56 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:10.124 18:15:56 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:10.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:10.124 18:15:56 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:10.124 18:15:56 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:10.124 18:15:56 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:10.124 [2024-07-11 18:15:56.391709] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
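Both JSON negative tests just completed (bdev_json_nonenclosed and bdev_json_nonarray) exercise the same expected-failure shape: feed bdevperf a deliberately malformed --json config, let spdk_app_stop exit non-zero, and assert on the status. A sketch of that shape; the 234 comes from the es=234 lines in the trace (plausibly a negative errno truncated to 8 bits, though the log only shows the number):

    es=0
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' || es=$?
    [[ $es == 234 ]]    # "spdk_app_stop'd on non-zero"; blockdev.sh then continues via `true`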
00:09:10.124 [2024-07-11 18:15:56.391913] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79567 ] 00:09:10.381 [2024-07-11 18:15:56.543147] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:10.381 [2024-07-11 18:15:56.587032] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:10.948 18:15:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:10.948 18:15:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@862 -- # return 0 00:09:10.948 18:15:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:10.948 18:15:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.948 18:15:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:11.207 Some configs were skipped because the RPC state that can call them passed over. 00:09:11.207 18:15:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:11.207 18:15:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@619 -- # rpc_cmd bdev_wait_for_examine 00:09:11.207 18:15:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:11.207 18:15:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:11.207 18:15:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:11.207 18:15:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:09:11.207 18:15:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:11.207 18:15:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:11.207 18:15:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:11.207 18:15:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # bdev='[ 00:09:11.207 { 00:09:11.207 "name": "Nvme0n1p1", 00:09:11.207 "aliases": [ 00:09:11.207 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:09:11.207 ], 00:09:11.207 "product_name": "GPT Disk", 00:09:11.207 "block_size": 4096, 00:09:11.207 "num_blocks": 774144, 00:09:11.207 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:09:11.207 "md_size": 64, 00:09:11.207 "md_interleave": false, 00:09:11.207 "dif_type": 0, 00:09:11.207 "assigned_rate_limits": { 00:09:11.207 "rw_ios_per_sec": 0, 00:09:11.207 "rw_mbytes_per_sec": 0, 00:09:11.207 "r_mbytes_per_sec": 0, 00:09:11.207 "w_mbytes_per_sec": 0 00:09:11.207 }, 00:09:11.207 "claimed": false, 00:09:11.207 "zoned": false, 00:09:11.207 "supported_io_types": { 00:09:11.207 "read": true, 00:09:11.207 "write": true, 00:09:11.207 "unmap": true, 00:09:11.207 "flush": true, 00:09:11.207 "reset": true, 00:09:11.207 "nvme_admin": false, 00:09:11.208 "nvme_io": false, 00:09:11.208 "nvme_io_md": false, 00:09:11.208 "write_zeroes": true, 00:09:11.208 "zcopy": false, 00:09:11.208 "get_zone_info": false, 00:09:11.208 "zone_management": false, 00:09:11.208 "zone_append": false, 00:09:11.208 "compare": true, 00:09:11.208 "compare_and_write": false, 00:09:11.208 "abort": true, 00:09:11.208 "seek_hole": false, 00:09:11.208 "seek_data": false, 00:09:11.208 "copy": 
true, 00:09:11.208 "nvme_iov_md": false 00:09:11.208 }, 00:09:11.208 "driver_specific": { 00:09:11.208 "gpt": { 00:09:11.208 "base_bdev": "Nvme0n1", 00:09:11.208 "offset_blocks": 256, 00:09:11.208 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:09:11.208 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:09:11.208 "partition_name": "SPDK_TEST_first" 00:09:11.208 } 00:09:11.208 } 00:09:11.208 } 00:09:11.208 ]' 00:09:11.208 18:15:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r length 00:09:11.467 18:15:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 1 == \1 ]] 00:09:11.467 18:15:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].aliases[0]' 00:09:11.467 18:15:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:09:11.467 18:15:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@624 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:09:11.467 18:15:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@624 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:09:11.467 18:15:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:09:11.467 18:15:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:11.467 18:15:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:11.467 18:15:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:11.467 18:15:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # bdev='[ 00:09:11.467 { 00:09:11.467 "name": "Nvme0n1p2", 00:09:11.467 "aliases": [ 00:09:11.467 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:09:11.467 ], 00:09:11.467 "product_name": "GPT Disk", 00:09:11.467 "block_size": 4096, 00:09:11.467 "num_blocks": 774143, 00:09:11.467 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:09:11.467 "md_size": 64, 00:09:11.467 "md_interleave": false, 00:09:11.467 "dif_type": 0, 00:09:11.467 "assigned_rate_limits": { 00:09:11.467 "rw_ios_per_sec": 0, 00:09:11.467 "rw_mbytes_per_sec": 0, 00:09:11.467 "r_mbytes_per_sec": 0, 00:09:11.467 "w_mbytes_per_sec": 0 00:09:11.467 }, 00:09:11.467 "claimed": false, 00:09:11.467 "zoned": false, 00:09:11.467 "supported_io_types": { 00:09:11.467 "read": true, 00:09:11.467 "write": true, 00:09:11.467 "unmap": true, 00:09:11.467 "flush": true, 00:09:11.467 "reset": true, 00:09:11.467 "nvme_admin": false, 00:09:11.467 "nvme_io": false, 00:09:11.467 "nvme_io_md": false, 00:09:11.467 "write_zeroes": true, 00:09:11.467 "zcopy": false, 00:09:11.467 "get_zone_info": false, 00:09:11.467 "zone_management": false, 00:09:11.467 "zone_append": false, 00:09:11.467 "compare": true, 00:09:11.467 "compare_and_write": false, 00:09:11.467 "abort": true, 00:09:11.467 "seek_hole": false, 00:09:11.467 "seek_data": false, 00:09:11.467 "copy": true, 00:09:11.467 "nvme_iov_md": false 00:09:11.467 }, 00:09:11.467 "driver_specific": { 00:09:11.467 "gpt": { 00:09:11.467 "base_bdev": "Nvme0n1", 00:09:11.467 "offset_blocks": 774400, 00:09:11.467 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:09:11.467 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:09:11.467 "partition_name": "SPDK_TEST_second" 00:09:11.467 } 00:09:11.467 
} 00:09:11.467 } 00:09:11.467 ]' 00:09:11.467 18:15:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r length 00:09:11.467 18:15:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ 1 == \1 ]] 00:09:11.467 18:15:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].aliases[0]' 00:09:11.467 18:15:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:09:11.467 18:15:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@629 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:09:11.726 18:15:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@629 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:09:11.726 18:15:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@631 -- # killprocess 79567 00:09:11.726 18:15:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@948 -- # '[' -z 79567 ']' 00:09:11.726 18:15:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@952 -- # kill -0 79567 00:09:11.726 18:15:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@953 -- # uname 00:09:11.726 18:15:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:11.726 18:15:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 79567 00:09:11.726 18:15:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:11.726 killing process with pid 79567 00:09:11.726 18:15:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:11.726 18:15:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@966 -- # echo 'killing process with pid 79567' 00:09:11.726 18:15:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@967 -- # kill 79567 00:09:11.726 18:15:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # wait 79567 00:09:11.985 00:09:11.985 real 0m1.956s 00:09:11.985 user 0m2.259s 00:09:11.985 sys 0m0.369s 00:09:11.985 18:15:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:11.985 18:15:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:11.985 ************************************ 00:09:11.985 END TEST bdev_gpt_uuid 00:09:11.985 ************************************ 00:09:11.985 18:15:58 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 0 00:09:11.985 18:15:58 blockdev_nvme_gpt -- bdev/blockdev.sh@798 -- # [[ gpt == crypto_sw ]] 00:09:11.985 18:15:58 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:09:11.985 18:15:58 blockdev_nvme_gpt -- bdev/blockdev.sh@811 -- # cleanup 00:09:11.985 18:15:58 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:09:11.985 18:15:58 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:11.985 18:15:58 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:09:11.985 18:15:58 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:09:11.985 18:15:58 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:09:11.985 18:15:58 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 
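Stripped of the xtrace noise, the GPT UUID checks above reduce to reading each partition bdev back through bdev_get_bdevs and string-comparing the GUIDs. A condensed sketch of the blockdev.sh@626-@629 assertions (rpc_cmd is the harness helper wrapping rpc.py):

    bdev=$(rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df)
    [[ $(jq -r 'length' <<< "$bdev") == 1 ]]                  # exactly one bdev matched the GUID
    [[ $(jq -r '.[0].aliases[0]' <<< "$bdev") == abf1734f-66e5-4c0f-aa29-4021d4d307df ]]
    [[ $(jq -r '.[0].driver_specific.gpt.unique_partition_guid' <<< "$bdev") == abf1734f-66e5-4c0f-aa29-4021d4d307df ]]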
00:09:12.244 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:12.504 Waiting for block devices as requested 00:09:12.504 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:12.504 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:12.763 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:12.763 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:18.030 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:18.030 18:16:04 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme1n1 ]] 00:09:18.030 18:16:04 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme1n1 00:09:18.030 /dev/nvme1n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:09:18.030 /dev/nvme1n1: 8 bytes were erased at offset 0x17a179000 (gpt): 45 46 49 20 50 41 52 54 00:09:18.030 /dev/nvme1n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:09:18.030 /dev/nvme1n1: calling ioctl to re-read partition table: Success 00:09:18.030 18:16:04 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:09:18.030 00:09:18.030 real 0m50.835s 00:09:18.030 user 1m4.474s 00:09:18.030 sys 0m9.133s 00:09:18.030 18:16:04 blockdev_nvme_gpt -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:18.030 ************************************ 00:09:18.030 END TEST blockdev_nvme_gpt 00:09:18.030 18:16:04 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:18.030 ************************************ 00:09:18.288 18:16:04 -- common/autotest_common.sh@1142 -- # return 0 00:09:18.289 18:16:04 -- spdk/autotest.sh@216 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:09:18.289 18:16:04 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:18.289 18:16:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:18.289 18:16:04 -- common/autotest_common.sh@10 -- # set +x 00:09:18.289 ************************************ 00:09:18.289 START TEST nvme 00:09:18.289 ************************************ 00:09:18.289 18:16:04 nvme -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:09:18.289 * Looking for test storage... 00:09:18.289 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:18.289 18:16:04 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:18.856 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:19.425 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:19.425 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:19.425 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:19.425 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:19.425 18:16:05 nvme -- nvme/nvme.sh@79 -- # uname 00:09:19.425 18:16:05 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:09:19.425 18:16:05 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:09:19.425 18:16:05 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:09:19.425 18:16:05 nvme -- common/autotest_common.sh@1080 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:09:19.425 18:16:05 nvme -- common/autotest_common.sh@1066 -- # _randomize_va_space=2 00:09:19.425 18:16:05 nvme -- common/autotest_common.sh@1067 -- # echo 0 00:09:19.425 Waiting for stub to ready for secondary processes... 
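A decoding note on the wipefs lines above: the eight bytes erased at the GPT header offsets are the ASCII signature "EFI PART", and the 55 aa pair at offset 0x1fe is the protective-MBR boot signature:

    printf '\x45\x46\x49\x20\x50\x41\x52\x54\n'   # prints: EFI PART (GPT header signature)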
00:09:19.425 18:16:05 nvme -- common/autotest_common.sh@1069 -- # stubpid=80181 00:09:19.425 18:16:05 nvme -- common/autotest_common.sh@1068 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:09:19.425 18:16:05 nvme -- common/autotest_common.sh@1070 -- # echo Waiting for stub to ready for secondary processes... 00:09:19.425 18:16:05 nvme -- common/autotest_common.sh@1071 -- # '[' -e /var/run/spdk_stub0 ']' 00:09:19.425 18:16:05 nvme -- common/autotest_common.sh@1073 -- # [[ -e /proc/80181 ]] 00:09:19.425 18:16:05 nvme -- common/autotest_common.sh@1074 -- # sleep 1s 00:09:19.684 [2024-07-11 18:16:05.865298] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:09:19.684 [2024-07-11 18:16:05.865494] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:09:20.252 [2024-07-11 18:16:06.646203] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:20.512 [2024-07-11 18:16:06.677670] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:20.512 [2024-07-11 18:16:06.677758] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:20.512 [2024-07-11 18:16:06.677874] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:20.512 [2024-07-11 18:16:06.695118] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:09:20.512 [2024-07-11 18:16:06.695172] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:20.512 [2024-07-11 18:16:06.707250] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:09:20.512 [2024-07-11 18:16:06.707577] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:09:20.512 [2024-07-11 18:16:06.708295] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:20.512 [2024-07-11 18:16:06.708588] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:09:20.512 [2024-07-11 18:16:06.708703] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:09:20.512 [2024-07-11 18:16:06.709364] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:20.512 [2024-07-11 18:16:06.709593] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:09:20.512 [2024-07-11 18:16:06.709688] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:09:20.512 [2024-07-11 18:16:06.711440] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:20.512 [2024-07-11 18:16:06.711665] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:09:20.512 [2024-07-11 18:16:06.711770] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:09:20.512 [2024-07-11 18:16:06.711899] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:09:20.512 [2024-07-11 18:16:06.712033] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:09:20.512 done. 
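The "Waiting for stub to ready for secondary processes..." and "done." pair comes from a poll loop in autotest_common.sh, visible in the @1071-@1074 trace lines. Reconstructed as a sketch from the traced commands, with the early-exit branch assumed (the trace only shows the happy path):

    stubpid=80181                          # from the traced assignment at @1069
    while [ ! -e /var/run/spdk_stub0 ]; do
        [[ -e /proc/$stubpid ]] || exit 1  # assumed bail-out if the stub dies early
        sleep 1s
    done
    echo done.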
00:09:20.512 18:16:06 nvme -- common/autotest_common.sh@1071 -- # '[' -e /var/run/spdk_stub0 ']' 00:09:20.512 18:16:06 nvme -- common/autotest_common.sh@1076 -- # echo done. 00:09:20.512 18:16:06 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:09:20.512 18:16:06 nvme -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:09:20.512 18:16:06 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:20.512 18:16:06 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:20.512 ************************************ 00:09:20.512 START TEST nvme_reset 00:09:20.512 ************************************ 00:09:20.512 18:16:06 nvme.nvme_reset -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:09:20.771 Initializing NVMe Controllers 00:09:20.771 Skipping QEMU NVMe SSD at 0000:00:13.0 00:09:20.771 Skipping QEMU NVMe SSD at 0000:00:10.0 00:09:20.771 Skipping QEMU NVMe SSD at 0000:00:11.0 00:09:20.771 Skipping QEMU NVMe SSD at 0000:00:12.0 00:09:20.771 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:09:20.771 ************************************ 00:09:20.771 END TEST nvme_reset 00:09:20.771 ************************************ 00:09:20.771 00:09:20.771 real 0m0.258s 00:09:20.771 user 0m0.089s 00:09:20.771 sys 0m0.115s 00:09:20.771 18:16:07 nvme.nvme_reset -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:20.771 18:16:07 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:09:20.771 18:16:07 nvme -- common/autotest_common.sh@1142 -- # return 0 00:09:20.771 18:16:07 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:09:20.771 18:16:07 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:20.771 18:16:07 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:20.771 18:16:07 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:20.771 ************************************ 00:09:20.771 START TEST nvme_identify 00:09:20.771 ************************************ 00:09:20.771 18:16:07 nvme.nvme_identify -- common/autotest_common.sh@1123 -- # nvme_identify 00:09:20.771 18:16:07 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:09:20.771 18:16:07 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:09:20.771 18:16:07 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:09:20.771 18:16:07 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:09:20.771 18:16:07 nvme.nvme_identify -- common/autotest_common.sh@1513 -- # bdfs=() 00:09:20.771 18:16:07 nvme.nvme_identify -- common/autotest_common.sh@1513 -- # local bdfs 00:09:20.771 18:16:07 nvme.nvme_identify -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:20.771 18:16:07 nvme.nvme_identify -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:09:20.771 18:16:07 nvme.nvme_identify -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:21.033 18:16:07 nvme.nvme_identify -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:09:21.033 18:16:07 nvme.nvme_identify -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:21.033 18:16:07 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:09:21.033 
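nvme_identify starts by collecting the NVMe PCI addresses, and the @1513-@1519 traces show how: the generated controller config is rendered to JSON and the traddr fields are pulled out with jq. As a standalone sketch ($rootdir is the repo checkout, as in the harness):

    get_nvme_bdfs() {
        local bdfs=()
        bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
        ((${#bdfs[@]} == 0)) && return 1   # the trace evaluates (( 4 == 0 )) at @1515
        printf '%s\n' "${bdfs[@]}"         # 0000:00:10.0 ... 0000:00:13.0 in this run
    }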
===================================================== 00:09:21.033 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:21.033 ===================================================== 00:09:21.033 Controller Capabilities/Features 00:09:21.033 ================================ 00:09:21.033 Vendor ID: 1b36 00:09:21.033 Subsystem Vendor ID: 1af4 00:09:21.033 Serial Number: 12343 00:09:21.033 Model Number: QEMU NVMe Ctrl 00:09:21.033 Firmware Version: 8.0.0 00:09:21.033 Recommended Arb Burst: 6 00:09:21.033 IEEE OUI Identifier: 00 54 52 00:09:21.033 Multi-path I/O 00:09:21.033 May have multiple subsystem ports: No 00:09:21.033 May have multiple controllers: Yes 00:09:21.033 Associated with SR-IOV VF: No 00:09:21.033 Max Data Transfer Size: 524288 00:09:21.033 Max Number of Namespaces: 256 00:09:21.033 Max Number of I/O Queues: 64 00:09:21.033 NVMe Specification Version (VS): 1.4 00:09:21.033 NVMe Specification Version (Identify): 1.4 00:09:21.033 Maximum Queue Entries: 2048 00:09:21.033 Contiguous Queues Required: Yes 00:09:21.033 Arbitration Mechanisms Supported 00:09:21.033 Weighted Round Robin: Not Supported 00:09:21.033 Vendor Specific: Not Supported 00:09:21.033 Reset Timeout: 7500 ms 00:09:21.033 Doorbell Stride: 4 bytes 00:09:21.033 NVM Subsystem Reset: Not Supported 00:09:21.033 Command Sets Supported 00:09:21.033 NVM Command Set: Supported 00:09:21.033 Boot Partition: Not Supported 00:09:21.033 Memory Page Size Minimum: 4096 bytes 00:09:21.033 Memory Page Size Maximum: 65536 bytes 00:09:21.033 Persistent Memory Region: Not Supported 00:09:21.033 Optional Asynchronous Events Supported 00:09:21.033 Namespace Attribute Notices: Supported 00:09:21.033 Firmware Activation Notices: Not Supported 00:09:21.033 ANA Change Notices: Not Supported 00:09:21.033 PLE Aggregate Log Change Notices: Not Supported 00:09:21.033 LBA Status Info Alert Notices: Not Supported 00:09:21.033 EGE Aggregate Log Change Notices: Not Supported 00:09:21.033 Normal NVM Subsystem Shutdown event: Not Supported 00:09:21.033 Zone Descriptor Change Notices: Not Supported 00:09:21.033 Discovery Log Change Notices: Not Supported 00:09:21.033 Controller Attributes 00:09:21.033 128-bit Host Identifier: Not Supported 00:09:21.033 Non-Operational Permissive Mode: Not Supported 00:09:21.033 NVM Sets: Not Supported 00:09:21.033 Read Recovery Levels: Not Supported 00:09:21.033 Endurance Groups: Supported 00:09:21.034 Predictable Latency Mode: Not Supported 00:09:21.034 Traffic Based Keep ALive: Not Supported 00:09:21.034 Namespace Granularity: Not Supported 00:09:21.034 SQ Associations: Not Supported 00:09:21.034 UUID List: Not Supported 00:09:21.034 Multi-Domain Subsystem: Not Supported 00:09:21.034 Fixed Capacity Management: Not Supported 00:09:21.034 Variable Capacity Management: Not Supported 00:09:21.034 Delete Endurance Group: Not Supported 00:09:21.034 Delete NVM Set: Not Supported 00:09:21.034 Extended LBA Formats Supported: Supported 00:09:21.034 Flexible Data Placement Supported: Supported 00:09:21.034 00:09:21.034 Controller Memory Buffer Support 00:09:21.034 ================================ 00:09:21.034 Supported: No 00:09:21.034 00:09:21.034 Persistent Memory Region Support 00:09:21.034 ================================ 00:09:21.034 Supported: No 00:09:21.034 00:09:21.034 Admin Command Set Attributes 00:09:21.034 ============================ 00:09:21.034 Security Send/Receive: Not Supported 00:09:21.034 Format NVM: Supported 00:09:21.034 Firmware Activate/Download: Not Supported 00:09:21.034 Namespace Management: Supported 
00:09:21.034 Device Self-Test: Not Supported 00:09:21.034 Directives: Supported 00:09:21.034 NVMe-MI: Not Supported 00:09:21.034 Virtualization Management: Not Supported 00:09:21.034 Doorbell Buffer Config: Supported 00:09:21.034 Get LBA Status Capability: Not Supported 00:09:21.034 Command & Feature Lockdown Capability: Not Supported 00:09:21.034 Abort Command Limit: 4 00:09:21.034 Async Event Request Limit: 4 00:09:21.034 Number of Firmware Slots: N/A 00:09:21.034 Firmware Slot 1 Read-Only: N/A 00:09:21.034 Firmware Activation Without Reset: N/A 00:09:21.034 Multiple Update Detection Support: N/A 00:09:21.034 Firmware Update Granularity: No Information Provided 00:09:21.034 Per-Namespace SMART Log: Yes 00:09:21.034 Asymmetric Namespace Access Log Page: Not Supported 00:09:21.034 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:09:21.034 Command Effects Log Page: Supported 00:09:21.034 Get Log Page Extended Data: Supported 00:09:21.034 Telemetry Log Pages: Not Supported 00:09:21.034 Persistent Event Log Pages: Not Supported 00:09:21.034 Supported Log Pages Log Page: May Support 00:09:21.034 Commands Supported & Effects Log Page: Not Supported 00:09:21.034 Feature Identifiers & Effects Log Page:May Support 00:09:21.034 NVMe-MI Commands & Effects Log Page: May Support 00:09:21.034 Data Area 4 for Telemetry Log: Not Supported 00:09:21.034 Error Log Page Entries Supported: 1 00:09:21.034 Keep Alive: Not Supported 00:09:21.034 00:09:21.034 NVM Command Set Attributes 00:09:21.034 ========================== 00:09:21.034 Submission Queue Entry Size 00:09:21.034 Max: 64 00:09:21.034 Min: 64 00:09:21.034 Completion Queue Entry Size 00:09:21.034 Max: 16 00:09:21.034 Min: 16 00:09:21.034 Number of Namespaces: 256 00:09:21.034 Compare Command: Supported 00:09:21.034 Write Uncorrectable Command: Not Supported 00:09:21.034 Dataset Management Command: Supported 00:09:21.034 Write Zeroes Command: Supported 00:09:21.034 Set Features Save Field: Supported 00:09:21.034 Reservations: Not Supported 00:09:21.034 Timestamp: Supported 00:09:21.034 Copy: Supported 00:09:21.034 Volatile Write Cache: Present 00:09:21.034 Atomic Write Unit (Normal): 1 00:09:21.034 Atomic Write Unit (PFail): 1 00:09:21.034 Atomic Compare & Write Unit: 1 00:09:21.034 Fused Compare & Write: Not Supported 00:09:21.034 Scatter-Gather List 00:09:21.034 SGL Command Set: Supported 00:09:21.034 SGL Keyed: Not Supported 00:09:21.034 SGL Bit Bucket Descriptor: Not Supported 00:09:21.034 SGL Metadata Pointer: Not Supported 00:09:21.034 Oversized SGL: Not Supported 00:09:21.034 SGL Metadata Address: Not Supported 00:09:21.034 SGL Offset: Not Supported 00:09:21.034 Transport SGL Data Block: Not Supported 00:09:21.034 Replay Protected Memory Block: Not Supported 00:09:21.034 00:09:21.034 Firmware Slot Information 00:09:21.034 ========================= 00:09:21.034 Active slot: 1 00:09:21.034 Slot 1 Firmware Revision: 1.0 00:09:21.034 00:09:21.034 00:09:21.034 Commands Supported and Effects 00:09:21.034 ============================== 00:09:21.034 Admin Commands 00:09:21.034 -------------- 00:09:21.034 Delete I/O Submission Queue (00h): Supported 00:09:21.034 Create I/O Submission Queue (01h): Supported 00:09:21.034 Get Log Page (02h): Supported 00:09:21.034 Delete I/O Completion Queue (04h): Supported 00:09:21.034 Create I/O Completion Queue (05h): Supported 00:09:21.034 Identify (06h): Supported 00:09:21.034 Abort (08h): Supported 00:09:21.034 Set Features (09h): Supported 00:09:21.034 Get Features (0Ah): Supported 00:09:21.034 Asynchronous Event 
Request (0Ch): Supported 00:09:21.034 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:21.034 Directive Send (19h): Supported 00:09:21.034 Directive Receive (1Ah): Supported 00:09:21.034 Virtualization Management (1Ch): Supported 00:09:21.034 Doorbell Buffer Config (7Ch): Supported 00:09:21.034 Format NVM (80h): Supported LBA-Change 00:09:21.034 I/O Commands 00:09:21.034 ------------ 00:09:21.034 Flush (00h): Supported LBA-Change 00:09:21.034 Write (01h): Supported LBA-Change 00:09:21.034 Read (02h): Supported 00:09:21.034 Compare (05h): Supported 00:09:21.034 Write Zeroes (08h): Supported LBA-Change 00:09:21.034 Dataset Management (09h): Supported LBA-Change 00:09:21.034 Unknown (0Ch): Supported 00:09:21.034 Unknown (12h): Supported 00:09:21.034 Copy (19h): Supported LBA-Change 00:09:21.034 Unknown (1Dh): Supported LBA-Change 00:09:21.034 00:09:21.034 Error Log 00:09:21.034 ========= 00:09:21.034 00:09:21.034 Arbitration 00:09:21.034 =========== 00:09:21.034 Arbitration Burst: no limit 00:09:21.034 00:09:21.034 Power Management 00:09:21.034 ================ 00:09:21.034 Number of Power States: 1 00:09:21.034 Current Power State: Power State #0 00:09:21.034 Power State #0: 00:09:21.034 Max Power: 25.00 W 00:09:21.034 Non-Operational State: Operational 00:09:21.034 Entry Latency: 16 microseconds 00:09:21.034 Exit Latency: 4 microseconds 00:09:21.034 Relative Read Throughput: 0 00:09:21.034 Relative Read Latency: 0 00:09:21.034 Relative Write Throughput: 0 00:09:21.034 Relative Write Latency: 0 00:09:21.034 Idle Power: Not Reported 00:09:21.034 Active Power: Not Reported 00:09:21.034 Non-Operational Permissive Mode: Not Supported 00:09:21.034 00:09:21.034 Health Information 00:09:21.034 ================== 00:09:21.034 Critical Warnings: 00:09:21.034 Available Spare Space: OK 00:09:21.034 Temperature: OK 00:09:21.034 Device Reliability: OK 00:09:21.034 Read Only: No 00:09:21.034 Volatile Memory Backup: OK 00:09:21.034 Current Temperature: 323 Kelvin (50 Celsius) 00:09:21.034 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:21.034 Available Spare: 0% 00:09:21.034 Available Spare Threshold: 0% 00:09:21.034 Life Percentage Used: 0% 00:09:21.034 Data Units Read: 749 00:09:21.034 Data Units Written: 642 00:09:21.034 Host Read Commands: 33480 00:09:21.034 Host Write Commands: 32070 00:09:21.034 Controller Busy Time: 0 minutes 00:09:21.034 Power Cycles: 0 00:09:21.034 Power On Hours: 0 hours 00:09:21.034 Unsafe Shutdowns: 0 00:09:21.034 Unrecoverable Media Errors: 0 00:09:21.034 Lifetime Error Log Entries: 0 00:09:21.034 Warning Temperature Time: 0 minutes 00:09:21.034 Critical Temperature Time: 0 minutes 00:09:21.034 00:09:21.034 Number of Queues 00:09:21.034 ================ 00:09:21.034 Number of I/O Submission Queues: 64 00:09:21.034 Number of I/O Completion Queues: 64 00:09:21.034 00:09:21.034 ZNS Specific Controller Data 00:09:21.034 ============================ 00:09:21.034 Zone Append Size Limit: 0 00:09:21.034 00:09:21.034 00:09:21.034 Active Namespaces 00:09:21.034 ================= 00:09:21.034 Namespace ID:1 00:09:21.034 Error Recovery Timeout: Unlimited 00:09:21.034 Command Set Identifier: NVM (00h) 00:09:21.034 Deallocate: Supported 00:09:21.034 Deallocated/Unwritten Error: Supported 00:09:21.034 Deallocated Read Value: All 0x00 00:09:21.034 Deallocate in Write Zeroes: Not Supported 00:09:21.034 Deallocated Guard Field: 0xFFFF 00:09:21.034 Flush: Supported 00:09:21.034 Reservation: Not Supported 00:09:21.034 Namespace Sharing Capabilities: Multiple Controllers 
00:09:21.034 Size (in LBAs): 262144 (1GiB) 00:09:21.034 Capacity (in LBAs): 262144 (1GiB) 00:09:21.034 Utilization (in LBAs): 262144 (1GiB) 00:09:21.034 Thin Provisioning: Not Supported 00:09:21.034 Per-NS Atomic Units: No 00:09:21.034 Maximum Single Source Range Length: 128 00:09:21.034 Maximum Copy Length: 128 00:09:21.034 Maximum Source Range Count: 128 00:09:21.034 NGUID/EUI64 Never Reused: No 00:09:21.034 Namespace Write Protected: No 00:09:21.034 Endurance group ID: 1 00:09:21.034 Number of LBA Formats: 8 00:09:21.034 Current LBA Format: LBA Format #04 00:09:21.034 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:21.034 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:21.034 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:21.035 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:21.035 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:21.035 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:21.035 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:21.035 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:21.035 00:09:21.035 Get Feature FDP: 00:09:21.035 ================ 00:09:21.035 Enabled: Yes 00:09:21.035 FDP configuration index: 0 00:09:21.035 00:09:21.035 FDP configurations log page 00:09:21.035 =========================== 00:09:21.035 Number of FDP configurations: 1 00:09:21.035 Version: 0 00:09:21.035 Size: 112 00:09:21.035 FDP Configuration Descriptor: 0 00:09:21.035 Descriptor Size: 96 00:09:21.035 Reclaim Group Identifier format: 2 00:09:21.035 FDP Volatile Write Cache: Not Present 00:09:21.035 FDP Configuration: Valid 00:09:21.035 Vendor Specific Size: 0 00:09:21.035 Number of Reclaim Groups: 2 00:09:21.035 Number of Reclaim Unit Handles: 8 00:09:21.035 Max Placement Identifiers: 128 00:09:21.035 Number of Namespaces Supported: 256 00:09:21.035 Reclaim Unit Nominal Size: 6000000 bytes 00:09:21.035 Estimated Reclaim Unit Time Limit: Not Reported 00:09:21.035 RUH Desc #000: RUH Type: Initially Isolated 00:09:21.035 RUH Desc #001: RUH Type: Initially Isolated 00:09:21.035 RUH Desc #002: RUH Type: Initially Isolated 00:09:21.035 RUH Desc #003: RUH Type: Initially Isolated 00:09:21.035 RUH Desc #004: RUH Type: Initially Isolated 00:09:21.035 RUH Desc #005: RUH Type: Initially Isolated 00:09:21.035 RUH Desc #006: RUH Type: Initially Isolated 00:09:21.035 RUH Desc #007: RUH Type: Initially Isolated 00:09:21.035 00:09:21.035 FDP reclaim unit handle usage log page 00:09:21.035 ====================================== 00:09:21.035 Number of Reclaim Unit Handles: 8 00:09:21.035 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:21.035 RUH Usage Desc #001: RUH Attributes: Unused 00:09:21.035 RUH Usage Desc #002: RUH Attributes: Unused 00:09:21.035 RUH Usage Desc #003: RUH Attributes: Unused 00:09:21.035 RUH Usage Desc #004: RUH Attributes: Unused 00:09:21.035 RUH Usage Desc #005: RUH Attributes: Unused 00:09:21.035 RUH Usage Desc #006: RUH Attributes: Unused 00:09:21.035 RUH Usage Desc #007: RUH Attributes: Unused 00:09:21.035 00:09:21.035 FDP statistics log page 00:09:21.035 ======================= 00:09:21.035 Host bytes with metadata written: 408657920 00:09:21.035 Media bytes with metadata written: 408702976 00:09:21.035 Media bytes erased: 0 00:09:21.035 00:09:21.035 FDP events log page 00:09:21.035 =================== 00:09:21.035 Number of FDP events: 0 00:09:21.035 00:09:21.035 NVM Specific Namespace Data 00:09:21.035 =========================== 00:09:21.035 Logical Block Storage Tag Mask: 0 00:09:21.035 Protection
Information Capabilities: 00:09:21.035 16b Guard Protection Information Storage Tag Support: No 00:09:21.035 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:21.035 Storage Tag Check Read Support: No 00:09:21.035 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:21.035 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:21.035 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:21.035 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:21.035 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:21.035 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:21.035 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:21.035 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:21.035 ===================================================== 00:09:21.035 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:21.035 ===================================================== 00:09:21.035 Controller Capabilities/Features 00:09:21.035 ================================ 00:09:21.035 Vendor ID: 1b36 00:09:21.035 Subsystem Vendor ID: 1af4 00:09:21.035 Serial Number: 12340 00:09:21.035 Model Number: QEMU NVMe Ctrl 00:09:21.035 Firmware Version: 8.0.0 00:09:21.035 Recommended Arb Burst: 6 00:09:21.035 IEEE OUI Identifier: 00 54 52 00:09:21.035 Multi-path I/O 00:09:21.035 May have multiple subsystem ports: No 00:09:21.035 May have multiple controllers: No 00:09:21.035 Associated with SR-IOV VF: No 00:09:21.035 Max Data Transfer Size: 524288 00:09:21.035 Max Number of Namespaces: 256 00:09:21.035 Max Number of I/O Queues: 64 00:09:21.035 NVMe Specification Version (VS): 1.4 00:09:21.035 NVMe Specification Version (Identify): 1.4 00:09:21.035 Maximum Queue Entries: 2048 00:09:21.035 Contiguous Queues Required: Yes 00:09:21.035 Arbitration Mechanisms Supported 00:09:21.035 Weighted Round Robin: Not Supported 00:09:21.035 Vendor Specific: Not Supported 00:09:21.035 Reset Timeout: 7500 ms 00:09:21.035 Doorbell Stride: 4 bytes 00:09:21.035 NVM Subsystem Reset: Not Supported 00:09:21.035 Command Sets Supported 00:09:21.035 NVM Command Set: Supported 00:09:21.035 Boot Partition: Not Supported 00:09:21.035 Memory Page Size Minimum: 4096 bytes 00:09:21.035 Memory Page Size Maximum: 65536 bytes 00:09:21.035 Persistent Memory Region: Not Supported 00:09:21.035 Optional Asynchronous Events Supported 00:09:21.035 Namespace Attribute Notices: Supported 00:09:21.035 Firmware Activation Notices: Not Supported 00:09:21.035 ANA Change Notices: Not Supported 00:09:21.035 PLE Aggregate Log Change Notices: Not Supported 00:09:21.035 LBA Status Info Alert Notices: Not Supported 00:09:21.035 EGE Aggregate Log Change Notices: Not Supported 00:09:21.035 Normal NVM Subsystem Shutdown event: Not Supported 00:09:21.035 Zone Descriptor Change Notices: Not Supported 00:09:21.035 Discovery Log Change Notices: Not Supported 00:09:21.035 Controller Attributes 00:09:21.035 128-bit Host Identifier: Not Supported 00:09:21.035 Non-Operational Permissive Mode: Not Supported 00:09:21.035 NVM Sets: Not Supported 00:09:21.035 Read Recovery Levels: Not Supported 00:09:21.035 Endurance Groups: Not Supported 00:09:21.035 Predictable Latency Mode: Not Supported 00:09:21.035 Traffic 
Based Keep ALive: Not Supported 00:09:21.035 Namespace Granularity: Not Supported 00:09:21.035 SQ Associations: Not Supported 00:09:21.035 UUID List: Not Supported 00:09:21.035 Multi-Domain Subsystem: Not Supported 00:09:21.035 Fixed Capacity Management: Not Supported 00:09:21.035 Variable Capacity Management: Not Supported 00:09:21.035 Delete Endurance Group: Not Supported 00:09:21.035 Delete NVM Set: Not Supported 00:09:21.035 Extended LBA Formats Supported: Supported 00:09:21.035 Flexible Data Placement Supported: Not Supported 00:09:21.035 00:09:21.035 Controller Memory Buffer Support 00:09:21.035 ================================ 00:09:21.035 Supported: No 00:09:21.035 00:09:21.035 Persistent Memory Region Support 00:09:21.035 ================================ 00:09:21.035 Supported: No 00:09:21.035 00:09:21.035 Admin Command Set Attributes 00:09:21.035 ============================ 00:09:21.035 Security Send/Receive: Not Supported 00:09:21.035 Format NVM: Supported 00:09:21.035 Firmware Activate/Download: Not Supported 00:09:21.035 Namespace Management: Supported 00:09:21.035 Device Self-Test: Not Supported 00:09:21.035 Directives: Supported 00:09:21.035 NVMe-MI: Not Supported 00:09:21.035 Virtualization Management: Not Supported 00:09:21.035 Doorbell Buffer Config: Supported 00:09:21.035 Get LBA Status Capability: Not Supported 00:09:21.035 Command & Feature Lockdown Capability: Not Supported 00:09:21.035 Abort Command Limit: 4 00:09:21.035 Async Event Request Limit: 4 00:09:21.035 Number of Firmware Slots: N/A 00:09:21.035 Firmware Slot 1 Read-Only: N/A 00:09:21.035 Firmware Activation Without Reset: N/A 00:09:21.035 Multiple Update Detection Support: N/A 00:09:21.035 Firmware Update Granularity: No Information Provided 00:09:21.035 Per-Namespace SMART Log: Yes 00:09:21.035 Asymmetric Namespace Access Log Page: Not Supported 00:09:21.035 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:09:21.035 Command Effects Log Page: Supported 00:09:21.035 Get Log Page Extended Data: Supported 00:09:21.035 Telemetry Log Pages: Not Supported 00:09:21.035 Persistent Event Log Pages: Not Supported 00:09:21.035 Supported Log Pages Log Page: May Support 00:09:21.035 Commands Supported & Effects Log Page: Not Supported 00:09:21.035 Feature Identifiers & Effects Log Page:May Support 00:09:21.035 NVMe-MI Commands & Effects Log Page: May Support 00:09:21.035 Data Area 4 for Telemetry Log: Not Supported 00:09:21.035 Error Log Page Entries Supported: 1 00:09:21.035 Keep Alive: Not Supported 00:09:21.035 00:09:21.035 NVM Command Set Attributes 00:09:21.035 ========================== 00:09:21.035 Submission Queue Entry Size 00:09:21.035 Max: 64 00:09:21.035 Min: 64 00:09:21.035 Completion Queue Entry Size 00:09:21.035 Max: 16 00:09:21.035 Min: 16 00:09:21.035 Number of Namespaces: 256 00:09:21.035 Compare Command: Supported 00:09:21.035 Write Uncorrectable Command: Not Supported 00:09:21.035 Dataset Management Command: Supported 00:09:21.035 Write Zeroes Command: Supported 00:09:21.035 Set Features Save Field: Supported 00:09:21.035 Reservations: Not Supported 00:09:21.035 Timestamp: Supported 00:09:21.035 Copy: Supported 00:09:21.035 Volatile Write Cache: Present 00:09:21.035 Atomic Write Unit (Normal): 1 00:09:21.035 Atomic Write Unit (PFail): 1 00:09:21.035 Atomic Compare & Write Unit: 1 00:09:21.035 Fused Compare & Write: Not Supported 00:09:21.036 Scatter-Gather List 00:09:21.036 SGL Command Set: Supported 00:09:21.036 SGL Keyed: Not Supported 00:09:21.036 SGL Bit Bucket Descriptor: Not Supported 00:09:21.036 
SGL Metadata Pointer: Not Supported 00:09:21.036 Oversized SGL: Not Supported 00:09:21.036 SGL Metadata Address: Not Supported 00:09:21.036 SGL Offset: Not Supported 00:09:21.036 Transport SGL Data Block: Not Supported 00:09:21.036 Replay Protected Memory Block: Not Supported 00:09:21.036 00:09:21.036 Firmware Slot Information 00:09:21.036 ========================= 00:09:21.036 Active slot: 1 00:09:21.036 Slot 1 Firmware Revision: 1.0 00:09:21.036 00:09:21.036 00:09:21.036 Commands Supported and Effects 00:09:21.036 ============================== 00:09:21.036 Admin Commands 00:09:21.036 -------------- 00:09:21.036 Delete I/O Submission Queue (00h): Supported 00:09:21.036 Create I/O Submission Queue (01h): Supported 00:09:21.036 Get Log Page (02h): Supported 00:09:21.036 Delete I/O Completion Queue (04h): Supported 00:09:21.036 Create I/O Completion Queue (05h): Supported 00:09:21.036 Identify (06h): Supported 00:09:21.036 Abort (08h): Supported 00:09:21.036 Set Features (09h): Supported 00:09:21.036 Get Features (0Ah): Supported 00:09:21.036 Asynchronous Event Request (0Ch): Supported 00:09:21.036 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:21.036 Directive Send (19h): Supported 00:09:21.036 Directive Receive (1Ah): Supported 00:09:21.036 Virtualization Management (1Ch): Supported 00:09:21.036 Doorbell Buffer Config (7Ch): Supported 00:09:21.036 Format NVM (80h): Supported LBA-Change 00:09:21.036 I/O Commands 00:09:21.036 ------------ 00:09:21.036 Flush (00h): Supported LBA-Change 00:09:21.036 Write (01h): Supported LBA-Change 00:09:21.036 Read (02h): Supported 00:09:21.036 Compare (05h): Supported 00:09:21.036 Write Zeroes (08h): Supported LBA-Change 00:09:21.036 Dataset Management (09h): Supported LBA-Change 00:09:21.036 Unknown (0Ch): Supported 00:09:21.036 Unknown (12h): Supported 00:09:21.036 Copy (19h): Supported LBA-Change 00:09:21.036 Unknown (1Dh): Supported LBA-Change 00:09:21.036 00:09:21.036 Error Log 00:09:21.036 ========= 00:09:21.036 00:09:21.036 Arbitration 00:09:21.036 =========== 00:09:21.036 Arbitration Burst: no limit 00:09:21.036 00:09:21.036 Power Management 00:09:21.036 ================ 00:09:21.036 Number of Power States: 1 00:09:21.036 Current Power State: Power State #0 00:09:21.036 Power State #0: 00:09:21.036 Max Power: 25.00 W 00:09:21.036 Non-Operational State: Operational 00:09:21.036 Entry Latency: 16 microseconds 00:09:21.036 Exit Latency: 4 microseconds 00:09:21.036 Relative Read Throughput: 0 00:09:21.036 Relative Read Latency: 0 00:09:21.036 Relative Write Throughput: 0 00:09:21.036 Relative Write Latency: 0 00:09:21.036 Idle Power: Not Reported 00:09:21.036 Active Power: Not Reported 00:09:21.036 Non-Operational Permissive Mode: Not Supported 00:09:21.036 00:09:21.036 Health Information 00:09:21.036 ================== 00:09:21.036 Critical Warnings: 00:09:21.036 Available Spare Space: OK 00:09:21.036 Temperature: OK 00:09:21.036 Device Reliability: OK 00:09:21.036 Read Only: No 00:09:21.036 Volatile Memory Backup: OK 00:09:21.036 Current Temperature: 323 Kelvin (50 Celsius) 00:09:21.036 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:21.036 Available Spare: 0% 00:09:21.036 Available Spare Threshold: 0% 00:09:21.036 Life Percentage Used: 0% 00:09:21.036 Data Units Read: 969 00:09:21.036 Data Units Written: 808 00:09:21.036 Host Read Commands: 46897 00:09:21.036 Host Write Commands: 45499 00:09:21.036 Controller Busy Time: 0 minutes 00:09:21.036 Power Cycles: 0 00:09:21.036 Power On Hours: 0 hours 00:09:21.036 Unsafe Shutdowns: 0 
00:09:21.036 Unrecoverable Media Errors: 0 00:09:21.036 Lifetime Error Log Entries: 0 00:09:21.036 Warning Temperature Time: 0 minutes 00:09:21.036 Critical Temperature Time: 0 minutes 00:09:21.036 00:09:21.036 Number of Queues 00:09:21.036 ================ 00:09:21.036 Number of I/O Submission Queues: 64 00:09:21.036 Number of I/O Completion Queues: 64 00:09:21.036 00:09:21.036 ZNS Specific Controller Data 00:09:21.036 ============================ 00:09:21.036 Zone Append Size Limit: 0 00:09:21.036 00:09:21.036 00:09:21.036 Active Namespaces 00:09:21.036 ================= 00:09:21.036 Namespace ID:1 00:09:21.036 Error Recovery Timeout: Unlimited 00:09:21.036 Command Set Identifier: NVM (00h) 00:09:21.036 Deallocate: Supported 00:09:21.036 Deallocated/Unwritten Error: Supported 00:09:21.036 Deallocated Read Value: All 0x00 00:09:21.036 Deallocate in Write Zeroes: Not Supported 00:09:21.036 Deallocated Guard Field: 0xFFFF 00:09:21.036 Flush: Supported 00:09:21.036 Reservation: Not Supported 00:09:21.036 Metadata Transferred as: Separate Metadata Buffer 00:09:21.036 Namespace Sharing Capabilities: Private 00:09:21.036 Size (in LBAs): 1548666 (5GiB) 00:09:21.036 Capacity (in LBAs): 1548666 (5GiB) 00:09:21.036 Utilization (in LBAs): 1548666 (5GiB) 00:09:21.036 Thin Provisioning: Not Supported 00:09:21.036 Per-NS Atomic Units: No 00:09:21.036 Maximum Single Source Range Length: 128 00:09:21.036 Maximum Copy Length: 128 00:09:21.036 Maximum Source Range Count: 128 00:09:21.036 NGUID/EUI64 Never Reused: No 00:09:21.036 Namespace Write Protected: No 00:09:21.036 Number of LBA Formats: 8 00:09:21.036 [2024-07-11 18:16:07.410349] nvme_ctrlr.c:3604:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0] process 80202 terminated unexpected 00:09:21.036 [2024-07-11 18:16:07.412304] nvme_ctrlr.c:3604:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0] process 80202 terminated unexpected 00:09:21.036 [2024-07-11 18:16:07.413545] nvme_ctrlr.c:3604:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0] process 80202 terminated unexpected 00:09:21.036 Current LBA Format: LBA Format #07 00:09:21.036 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:21.036 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:21.036 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:21.036 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:21.036 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:21.036 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:21.036 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:21.036 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:21.036 00:09:21.036 NVM Specific Namespace Data 00:09:21.036 =========================== 00:09:21.036 Logical Block Storage Tag Mask: 0 00:09:21.036 Protection Information Capabilities: 00:09:21.036 16b Guard Protection Information Storage Tag Support: No 00:09:21.036 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:21.036 Storage Tag Check Read Support: No 00:09:21.036 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:21.036 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:21.036 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:21.036 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:21.036 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:21.036 Extended LBA
Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:21.036 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:21.036 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:21.036 ===================================================== 00:09:21.036 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:21.036 ===================================================== 00:09:21.036 Controller Capabilities/Features 00:09:21.036 ================================ 00:09:21.036 Vendor ID: 1b36 00:09:21.036 Subsystem Vendor ID: 1af4 00:09:21.036 Serial Number: 12341 00:09:21.036 Model Number: QEMU NVMe Ctrl 00:09:21.036 Firmware Version: 8.0.0 00:09:21.036 Recommended Arb Burst: 6 00:09:21.036 IEEE OUI Identifier: 00 54 52 00:09:21.036 Multi-path I/O 00:09:21.036 May have multiple subsystem ports: No 00:09:21.036 May have multiple controllers: No 00:09:21.036 Associated with SR-IOV VF: No 00:09:21.036 Max Data Transfer Size: 524288 00:09:21.036 Max Number of Namespaces: 256 00:09:21.036 Max Number of I/O Queues: 64 00:09:21.036 NVMe Specification Version (VS): 1.4 00:09:21.036 NVMe Specification Version (Identify): 1.4 00:09:21.036 Maximum Queue Entries: 2048 00:09:21.036 Contiguous Queues Required: Yes 00:09:21.036 Arbitration Mechanisms Supported 00:09:21.036 Weighted Round Robin: Not Supported 00:09:21.036 Vendor Specific: Not Supported 00:09:21.036 Reset Timeout: 7500 ms 00:09:21.036 Doorbell Stride: 4 bytes 00:09:21.036 NVM Subsystem Reset: Not Supported 00:09:21.036 Command Sets Supported 00:09:21.036 NVM Command Set: Supported 00:09:21.036 Boot Partition: Not Supported 00:09:21.036 Memory Page Size Minimum: 4096 bytes 00:09:21.036 Memory Page Size Maximum: 65536 bytes 00:09:21.036 Persistent Memory Region: Not Supported 00:09:21.036 Optional Asynchronous Events Supported 00:09:21.036 Namespace Attribute Notices: Supported 00:09:21.036 Firmware Activation Notices: Not Supported 00:09:21.036 ANA Change Notices: Not Supported 00:09:21.036 PLE Aggregate Log Change Notices: Not Supported 00:09:21.036 LBA Status Info Alert Notices: Not Supported 00:09:21.037 EGE Aggregate Log Change Notices: Not Supported 00:09:21.037 Normal NVM Subsystem Shutdown event: Not Supported 00:09:21.037 Zone Descriptor Change Notices: Not Supported 00:09:21.037 Discovery Log Change Notices: Not Supported 00:09:21.037 Controller Attributes 00:09:21.037 128-bit Host Identifier: Not Supported 00:09:21.037 Non-Operational Permissive Mode: Not Supported 00:09:21.037 NVM Sets: Not Supported 00:09:21.037 Read Recovery Levels: Not Supported 00:09:21.037 Endurance Groups: Not Supported 00:09:21.037 Predictable Latency Mode: Not Supported 00:09:21.037 Traffic Based Keep ALive: Not Supported 00:09:21.037 Namespace Granularity: Not Supported 00:09:21.037 SQ Associations: Not Supported 00:09:21.037 UUID List: Not Supported 00:09:21.037 Multi-Domain Subsystem: Not Supported 00:09:21.037 Fixed Capacity Management: Not Supported 00:09:21.037 Variable Capacity Management: Not Supported 00:09:21.037 Delete Endurance Group: Not Supported 00:09:21.037 Delete NVM Set: Not Supported 00:09:21.037 Extended LBA Formats Supported: Supported 00:09:21.037 Flexible Data Placement Supported: Not Supported 00:09:21.037 00:09:21.037 Controller Memory Buffer Support 00:09:21.037 ================================ 00:09:21.037 Supported: No 00:09:21.037 00:09:21.037 Persistent Memory Region Support 00:09:21.037 ================================ 
00:09:21.037 Supported: No 00:09:21.037 00:09:21.037 Admin Command Set Attributes 00:09:21.037 ============================ 00:09:21.037 Security Send/Receive: Not Supported 00:09:21.037 Format NVM: Supported 00:09:21.037 Firmware Activate/Download: Not Supported 00:09:21.037 Namespace Management: Supported 00:09:21.037 Device Self-Test: Not Supported 00:09:21.037 Directives: Supported 00:09:21.037 NVMe-MI: Not Supported 00:09:21.037 Virtualization Management: Not Supported 00:09:21.037 Doorbell Buffer Config: Supported 00:09:21.037 Get LBA Status Capability: Not Supported 00:09:21.037 Command & Feature Lockdown Capability: Not Supported 00:09:21.037 Abort Command Limit: 4 00:09:21.037 Async Event Request Limit: 4 00:09:21.037 Number of Firmware Slots: N/A 00:09:21.037 Firmware Slot 1 Read-Only: N/A 00:09:21.037 Firmware Activation Without Reset: N/A 00:09:21.037 Multiple Update Detection Support: N/A 00:09:21.037 Firmware Update Granularity: No Information Provided 00:09:21.037 Per-Namespace SMART Log: Yes 00:09:21.037 Asymmetric Namespace Access Log Page: Not Supported 00:09:21.037 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:09:21.037 Command Effects Log Page: Supported 00:09:21.037 Get Log Page Extended Data: Supported 00:09:21.037 Telemetry Log Pages: Not Supported 00:09:21.037 Persistent Event Log Pages: Not Supported 00:09:21.037 Supported Log Pages Log Page: May Support 00:09:21.037 Commands Supported & Effects Log Page: Not Supported 00:09:21.037 Feature Identifiers & Effects Log Page:May Support 00:09:21.037 NVMe-MI Commands & Effects Log Page: May Support 00:09:21.037 Data Area 4 for Telemetry Log: Not Supported 00:09:21.037 Error Log Page Entries Supported: 1 00:09:21.037 Keep Alive: Not Supported 00:09:21.037 00:09:21.037 NVM Command Set Attributes 00:09:21.037 ========================== 00:09:21.037 Submission Queue Entry Size 00:09:21.037 Max: 64 00:09:21.037 Min: 64 00:09:21.037 Completion Queue Entry Size 00:09:21.037 Max: 16 00:09:21.037 Min: 16 00:09:21.037 Number of Namespaces: 256 00:09:21.037 Compare Command: Supported 00:09:21.037 Write Uncorrectable Command: Not Supported 00:09:21.037 Dataset Management Command: Supported 00:09:21.037 Write Zeroes Command: Supported 00:09:21.037 Set Features Save Field: Supported 00:09:21.037 Reservations: Not Supported 00:09:21.037 Timestamp: Supported 00:09:21.037 Copy: Supported 00:09:21.037 Volatile Write Cache: Present 00:09:21.037 Atomic Write Unit (Normal): 1 00:09:21.037 Atomic Write Unit (PFail): 1 00:09:21.037 Atomic Compare & Write Unit: 1 00:09:21.037 Fused Compare & Write: Not Supported 00:09:21.037 Scatter-Gather List 00:09:21.037 SGL Command Set: Supported 00:09:21.037 SGL Keyed: Not Supported 00:09:21.037 SGL Bit Bucket Descriptor: Not Supported 00:09:21.037 SGL Metadata Pointer: Not Supported 00:09:21.037 Oversized SGL: Not Supported 00:09:21.037 SGL Metadata Address: Not Supported 00:09:21.037 SGL Offset: Not Supported 00:09:21.037 Transport SGL Data Block: Not Supported 00:09:21.037 Replay Protected Memory Block: Not Supported 00:09:21.037 00:09:21.037 Firmware Slot Information 00:09:21.037 ========================= 00:09:21.037 Active slot: 1 00:09:21.037 Slot 1 Firmware Revision: 1.0 00:09:21.037 00:09:21.037 00:09:21.037 Commands Supported and Effects 00:09:21.037 ============================== 00:09:21.037 Admin Commands 00:09:21.037 -------------- 00:09:21.037 Delete I/O Submission Queue (00h): Supported 00:09:21.037 Create I/O Submission Queue (01h): Supported 00:09:21.037 Get Log Page (02h): Supported 
00:09:21.037 Delete I/O Completion Queue (04h): Supported 00:09:21.037 Create I/O Completion Queue (05h): Supported 00:09:21.037 Identify (06h): Supported 00:09:21.037 Abort (08h): Supported 00:09:21.037 Set Features (09h): Supported 00:09:21.037 Get Features (0Ah): Supported 00:09:21.037 Asynchronous Event Request (0Ch): Supported 00:09:21.037 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:21.037 Directive Send (19h): Supported 00:09:21.037 Directive Receive (1Ah): Supported 00:09:21.037 Virtualization Management (1Ch): Supported 00:09:21.037 Doorbell Buffer Config (7Ch): Supported 00:09:21.037 Format NVM (80h): Supported LBA-Change 00:09:21.037 I/O Commands 00:09:21.037 ------------ 00:09:21.037 Flush (00h): Supported LBA-Change 00:09:21.037 Write (01h): Supported LBA-Change 00:09:21.037 Read (02h): Supported 00:09:21.037 Compare (05h): Supported 00:09:21.037 Write Zeroes (08h): Supported LBA-Change 00:09:21.037 Dataset Management (09h): Supported LBA-Change 00:09:21.037 Unknown (0Ch): Supported 00:09:21.037 Unknown (12h): Supported 00:09:21.037 Copy (19h): Supported LBA-Change 00:09:21.037 Unknown (1Dh): Supported LBA-Change 00:09:21.037 00:09:21.037 Error Log 00:09:21.037 ========= 00:09:21.037 00:09:21.037 Arbitration 00:09:21.037 =========== 00:09:21.037 Arbitration Burst: no limit 00:09:21.037 00:09:21.037 Power Management 00:09:21.037 ================ 00:09:21.037 Number of Power States: 1 00:09:21.037 Current Power State: Power State #0 00:09:21.037 Power State #0: 00:09:21.037 Max Power: 25.00 W 00:09:21.037 Non-Operational State: Operational 00:09:21.037 Entry Latency: 16 microseconds 00:09:21.037 Exit Latency: 4 microseconds 00:09:21.037 Relative Read Throughput: 0 00:09:21.037 Relative Read Latency: 0 00:09:21.037 Relative Write Throughput: 0 00:09:21.037 Relative Write Latency: 0 00:09:21.037 Idle Power: Not Reported 00:09:21.037 Active Power: Not Reported 00:09:21.037 Non-Operational Permissive Mode: Not Supported 00:09:21.037 00:09:21.037 Health Information 00:09:21.037 ================== 00:09:21.037 Critical Warnings: 00:09:21.037 Available Spare Space: OK 00:09:21.037 Temperature: OK 00:09:21.037 Device Reliability: OK 00:09:21.037 Read Only: No 00:09:21.037 Volatile Memory Backup: OK 00:09:21.037 Current Temperature: 323 Kelvin (50 Celsius) 00:09:21.037 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:21.037 Available Spare: 0% 00:09:21.037 Available Spare Threshold: 0% 00:09:21.037 Life Percentage Used: 0% 00:09:21.037 Data Units Read: 696 00:09:21.037 Data Units Written: 547 00:09:21.037 Host Read Commands: 33371 00:09:21.037 Host Write Commands: 31145 00:09:21.037 Controller Busy Time: 0 minutes 00:09:21.037 Power Cycles: 0 00:09:21.037 Power On Hours: 0 hours 00:09:21.037 Unsafe Shutdowns: 0 00:09:21.037 Unrecoverable Media Errors: 0 00:09:21.037 Lifetime Error Log Entries: 0 00:09:21.037 Warning Temperature Time: 0 minutes 00:09:21.037 Critical Temperature Time: 0 minutes 00:09:21.037 00:09:21.037 Number of Queues 00:09:21.037 ================ 00:09:21.037 Number of I/O Submission Queues: 64 00:09:21.037 Number of I/O Completion Queues: 64 00:09:21.037 00:09:21.037 ZNS Specific Controller Data 00:09:21.037 ============================ 00:09:21.037 Zone Append Size Limit: 0 00:09:21.037 00:09:21.037 00:09:21.037 Active Namespaces 00:09:21.037 ================= 00:09:21.037 Namespace ID:1 00:09:21.037 Error Recovery Timeout: Unlimited 00:09:21.037 Command Set Identifier: NVM (00h) 00:09:21.037 Deallocate: Supported 00:09:21.037 
Deallocated/Unwritten Error: Supported 00:09:21.037 Deallocated Read Value: All 0x00 00:09:21.037 Deallocate in Write Zeroes: Not Supported 00:09:21.037 Deallocated Guard Field: 0xFFFF 00:09:21.037 Flush: Supported 00:09:21.037 Reservation: Not Supported 00:09:21.037 Namespace Sharing Capabilities: Private 00:09:21.037 Size (in LBAs): 1310720 (5GiB) 00:09:21.037 Capacity (in LBAs): 1310720 (5GiB) 00:09:21.037 Utilization (in LBAs): 1310720 (5GiB) 00:09:21.037 Thin Provisioning: Not Supported 00:09:21.037 Per-NS Atomic Units: No 00:09:21.037 Maximum Single Source Range Length: 128 00:09:21.037 Maximum Copy Length: 128 00:09:21.038 Maximum Source Range Count: 128 00:09:21.038 NGUID/EUI64 Never Reused: No 00:09:21.038 Namespace Write Protected: No 00:09:21.038 Number of LBA Formats: 8 00:09:21.038 Current LBA Format: LBA Format #04 00:09:21.038 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:21.038 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:21.038 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:21.038 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:21.038 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:21.038 [2024-07-11 18:16:07.415020] nvme_ctrlr.c:3604:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0] process 80202 terminated unexpected 00:09:21.038 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:21.038 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:21.038 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:21.038 00:09:21.038 NVM Specific Namespace Data 00:09:21.038 =========================== 00:09:21.038 Logical Block Storage Tag Mask: 0 00:09:21.038 Protection Information Capabilities: 00:09:21.038 16b Guard Protection Information Storage Tag Support: No 00:09:21.038 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:21.038 Storage Tag Check Read Support: No 00:09:21.038 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:21.038 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:21.038 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:21.038 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:21.038 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:21.038 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:21.038 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:21.038 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:21.038 ===================================================== 00:09:21.038 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:21.038 ===================================================== 00:09:21.038 Controller Capabilities/Features 00:09:21.038 ================================ 00:09:21.038 Vendor ID: 1b36 00:09:21.038 Subsystem Vendor ID: 1af4 00:09:21.038 Serial Number: 12342 00:09:21.038 Model Number: QEMU NVMe Ctrl 00:09:21.038 Firmware Version: 8.0.0 00:09:21.038 Recommended Arb Burst: 6 00:09:21.038 IEEE OUI Identifier: 00 54 52 00:09:21.038 Multi-path I/O 00:09:21.038 May have multiple subsystem ports: No 00:09:21.038 May have multiple controllers: No 00:09:21.038 Associated with SR-IOV VF: No 00:09:21.038 Max Data Transfer Size: 524288 00:09:21.038 Max Number of Namespaces: 256 00:09:21.038 Max Number of I/O
Queues: 64 00:09:21.038 NVMe Specification Version (VS): 1.4 00:09:21.038 NVMe Specification Version (Identify): 1.4 00:09:21.038 Maximum Queue Entries: 2048 00:09:21.038 Contiguous Queues Required: Yes 00:09:21.038 Arbitration Mechanisms Supported 00:09:21.038 Weighted Round Robin: Not Supported 00:09:21.038 Vendor Specific: Not Supported 00:09:21.038 Reset Timeout: 7500 ms 00:09:21.038 Doorbell Stride: 4 bytes 00:09:21.038 NVM Subsystem Reset: Not Supported 00:09:21.038 Command Sets Supported 00:09:21.038 NVM Command Set: Supported 00:09:21.038 Boot Partition: Not Supported 00:09:21.038 Memory Page Size Minimum: 4096 bytes 00:09:21.038 Memory Page Size Maximum: 65536 bytes 00:09:21.038 Persistent Memory Region: Not Supported 00:09:21.038 Optional Asynchronous Events Supported 00:09:21.038 Namespace Attribute Notices: Supported 00:09:21.038 Firmware Activation Notices: Not Supported 00:09:21.038 ANA Change Notices: Not Supported 00:09:21.038 PLE Aggregate Log Change Notices: Not Supported 00:09:21.038 LBA Status Info Alert Notices: Not Supported 00:09:21.038 EGE Aggregate Log Change Notices: Not Supported 00:09:21.038 Normal NVM Subsystem Shutdown event: Not Supported 00:09:21.038 Zone Descriptor Change Notices: Not Supported 00:09:21.038 Discovery Log Change Notices: Not Supported 00:09:21.038 Controller Attributes 00:09:21.038 128-bit Host Identifier: Not Supported 00:09:21.038 Non-Operational Permissive Mode: Not Supported 00:09:21.038 NVM Sets: Not Supported 00:09:21.038 Read Recovery Levels: Not Supported 00:09:21.038 Endurance Groups: Not Supported 00:09:21.038 Predictable Latency Mode: Not Supported 00:09:21.038 Traffic Based Keep ALive: Not Supported 00:09:21.038 Namespace Granularity: Not Supported 00:09:21.038 SQ Associations: Not Supported 00:09:21.038 UUID List: Not Supported 00:09:21.038 Multi-Domain Subsystem: Not Supported 00:09:21.038 Fixed Capacity Management: Not Supported 00:09:21.038 Variable Capacity Management: Not Supported 00:09:21.038 Delete Endurance Group: Not Supported 00:09:21.038 Delete NVM Set: Not Supported 00:09:21.038 Extended LBA Formats Supported: Supported 00:09:21.038 Flexible Data Placement Supported: Not Supported 00:09:21.038 00:09:21.038 Controller Memory Buffer Support 00:09:21.038 ================================ 00:09:21.038 Supported: No 00:09:21.038 00:09:21.038 Persistent Memory Region Support 00:09:21.038 ================================ 00:09:21.038 Supported: No 00:09:21.038 00:09:21.038 Admin Command Set Attributes 00:09:21.038 ============================ 00:09:21.038 Security Send/Receive: Not Supported 00:09:21.038 Format NVM: Supported 00:09:21.038 Firmware Activate/Download: Not Supported 00:09:21.038 Namespace Management: Supported 00:09:21.038 Device Self-Test: Not Supported 00:09:21.038 Directives: Supported 00:09:21.038 NVMe-MI: Not Supported 00:09:21.038 Virtualization Management: Not Supported 00:09:21.038 Doorbell Buffer Config: Supported 00:09:21.038 Get LBA Status Capability: Not Supported 00:09:21.038 Command & Feature Lockdown Capability: Not Supported 00:09:21.038 Abort Command Limit: 4 00:09:21.038 Async Event Request Limit: 4 00:09:21.038 Number of Firmware Slots: N/A 00:09:21.038 Firmware Slot 1 Read-Only: N/A 00:09:21.038 Firmware Activation Without Reset: N/A 00:09:21.038 Multiple Update Detection Support: N/A 00:09:21.038 Firmware Update Granularity: No Information Provided 00:09:21.038 Per-Namespace SMART Log: Yes 00:09:21.038 Asymmetric Namespace Access Log Page: Not Supported 00:09:21.038 Subsystem NQN: 
nqn.2019-08.org.qemu:12342 00:09:21.038 Command Effects Log Page: Supported 00:09:21.038 Get Log Page Extended Data: Supported 00:09:21.038 Telemetry Log Pages: Not Supported 00:09:21.038 Persistent Event Log Pages: Not Supported 00:09:21.038 Supported Log Pages Log Page: May Support 00:09:21.038 Commands Supported & Effects Log Page: Not Supported 00:09:21.038 Feature Identifiers & Effects Log Page:May Support 00:09:21.038 NVMe-MI Commands & Effects Log Page: May Support 00:09:21.038 Data Area 4 for Telemetry Log: Not Supported 00:09:21.038 Error Log Page Entries Supported: 1 00:09:21.038 Keep Alive: Not Supported 00:09:21.038 00:09:21.038 NVM Command Set Attributes 00:09:21.039 ========================== 00:09:21.039 Submission Queue Entry Size 00:09:21.039 Max: 64 00:09:21.039 Min: 64 00:09:21.039 Completion Queue Entry Size 00:09:21.039 Max: 16 00:09:21.039 Min: 16 00:09:21.039 Number of Namespaces: 256 00:09:21.039 Compare Command: Supported 00:09:21.039 Write Uncorrectable Command: Not Supported 00:09:21.039 Dataset Management Command: Supported 00:09:21.039 Write Zeroes Command: Supported 00:09:21.039 Set Features Save Field: Supported 00:09:21.039 Reservations: Not Supported 00:09:21.039 Timestamp: Supported 00:09:21.039 Copy: Supported 00:09:21.039 Volatile Write Cache: Present 00:09:21.039 Atomic Write Unit (Normal): 1 00:09:21.039 Atomic Write Unit (PFail): 1 00:09:21.039 Atomic Compare & Write Unit: 1 00:09:21.039 Fused Compare & Write: Not Supported 00:09:21.039 Scatter-Gather List 00:09:21.039 SGL Command Set: Supported 00:09:21.039 SGL Keyed: Not Supported 00:09:21.039 SGL Bit Bucket Descriptor: Not Supported 00:09:21.039 SGL Metadata Pointer: Not Supported 00:09:21.039 Oversized SGL: Not Supported 00:09:21.039 SGL Metadata Address: Not Supported 00:09:21.039 SGL Offset: Not Supported 00:09:21.039 Transport SGL Data Block: Not Supported 00:09:21.039 Replay Protected Memory Block: Not Supported 00:09:21.039 00:09:21.039 Firmware Slot Information 00:09:21.039 ========================= 00:09:21.039 Active slot: 1 00:09:21.039 Slot 1 Firmware Revision: 1.0 00:09:21.039 00:09:21.039 00:09:21.039 Commands Supported and Effects 00:09:21.039 ============================== 00:09:21.039 Admin Commands 00:09:21.039 -------------- 00:09:21.039 Delete I/O Submission Queue (00h): Supported 00:09:21.039 Create I/O Submission Queue (01h): Supported 00:09:21.039 Get Log Page (02h): Supported 00:09:21.039 Delete I/O Completion Queue (04h): Supported 00:09:21.039 Create I/O Completion Queue (05h): Supported 00:09:21.039 Identify (06h): Supported 00:09:21.039 Abort (08h): Supported 00:09:21.039 Set Features (09h): Supported 00:09:21.039 Get Features (0Ah): Supported 00:09:21.039 Asynchronous Event Request (0Ch): Supported 00:09:21.039 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:21.039 Directive Send (19h): Supported 00:09:21.039 Directive Receive (1Ah): Supported 00:09:21.039 Virtualization Management (1Ch): Supported 00:09:21.039 Doorbell Buffer Config (7Ch): Supported 00:09:21.039 Format NVM (80h): Supported LBA-Change 00:09:21.039 I/O Commands 00:09:21.039 ------------ 00:09:21.039 Flush (00h): Supported LBA-Change 00:09:21.039 Write (01h): Supported LBA-Change 00:09:21.039 Read (02h): Supported 00:09:21.039 Compare (05h): Supported 00:09:21.039 Write Zeroes (08h): Supported LBA-Change 00:09:21.039 Dataset Management (09h): Supported LBA-Change 00:09:21.039 Unknown (0Ch): Supported 00:09:21.039 Unknown (12h): Supported 00:09:21.039 Copy (19h): Supported LBA-Change 
00:09:21.039 Unknown (1Dh): Supported LBA-Change 00:09:21.039 00:09:21.039 Error Log 00:09:21.039 ========= 00:09:21.039 00:09:21.039 Arbitration 00:09:21.039 =========== 00:09:21.039 Arbitration Burst: no limit 00:09:21.039 00:09:21.039 Power Management 00:09:21.039 ================ 00:09:21.039 Number of Power States: 1 00:09:21.039 Current Power State: Power State #0 00:09:21.039 Power State #0: 00:09:21.039 Max Power: 25.00 W 00:09:21.039 Non-Operational State: Operational 00:09:21.039 Entry Latency: 16 microseconds 00:09:21.039 Exit Latency: 4 microseconds 00:09:21.039 Relative Read Throughput: 0 00:09:21.039 Relative Read Latency: 0 00:09:21.039 Relative Write Throughput: 0 00:09:21.039 Relative Write Latency: 0 00:09:21.039 Idle Power: Not Reported 00:09:21.039 Active Power: Not Reported 00:09:21.039 Non-Operational Permissive Mode: Not Supported 00:09:21.039 00:09:21.039 Health Information 00:09:21.039 ================== 00:09:21.039 Critical Warnings: 00:09:21.039 Available Spare Space: OK 00:09:21.039 Temperature: OK 00:09:21.039 Device Reliability: OK 00:09:21.039 Read Only: No 00:09:21.039 Volatile Memory Backup: OK 00:09:21.039 Current Temperature: 323 Kelvin (50 Celsius) 00:09:21.039 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:21.039 Available Spare: 0% 00:09:21.039 Available Spare Threshold: 0% 00:09:21.039 Life Percentage Used: 0% 00:09:21.039 Data Units Read: 2152 00:09:21.039 Data Units Written: 1832 00:09:21.039 Host Read Commands: 99177 00:09:21.039 Host Write Commands: 94947 00:09:21.039 Controller Busy Time: 0 minutes 00:09:21.039 Power Cycles: 0 00:09:21.039 Power On Hours: 0 hours 00:09:21.039 Unsafe Shutdowns: 0 00:09:21.039 Unrecoverable Media Errors: 0 00:09:21.039 Lifetime Error Log Entries: 0 00:09:21.039 Warning Temperature Time: 0 minutes 00:09:21.039 Critical Temperature Time: 0 minutes 00:09:21.039 00:09:21.039 Number of Queues 00:09:21.039 ================ 00:09:21.039 Number of I/O Submission Queues: 64 00:09:21.039 Number of I/O Completion Queues: 64 00:09:21.039 00:09:21.039 ZNS Specific Controller Data 00:09:21.039 ============================ 00:09:21.039 Zone Append Size Limit: 0 00:09:21.039 00:09:21.039 00:09:21.039 Active Namespaces 00:09:21.039 ================= 00:09:21.039 Namespace ID:1 00:09:21.039 Error Recovery Timeout: Unlimited 00:09:21.039 Command Set Identifier: NVM (00h) 00:09:21.039 Deallocate: Supported 00:09:21.039 Deallocated/Unwritten Error: Supported 00:09:21.039 Deallocated Read Value: All 0x00 00:09:21.039 Deallocate in Write Zeroes: Not Supported 00:09:21.039 Deallocated Guard Field: 0xFFFF 00:09:21.039 Flush: Supported 00:09:21.039 Reservation: Not Supported 00:09:21.039 Namespace Sharing Capabilities: Private 00:09:21.039 Size (in LBAs): 1048576 (4GiB) 00:09:21.039 Capacity (in LBAs): 1048576 (4GiB) 00:09:21.039 Utilization (in LBAs): 1048576 (4GiB) 00:09:21.039 Thin Provisioning: Not Supported 00:09:21.039 Per-NS Atomic Units: No 00:09:21.039 Maximum Single Source Range Length: 128 00:09:21.039 Maximum Copy Length: 128 00:09:21.039 Maximum Source Range Count: 128 00:09:21.039 NGUID/EUI64 Never Reused: No 00:09:21.039 Namespace Write Protected: No 00:09:21.039 Number of LBA Formats: 8 00:09:21.039 Current LBA Format: LBA Format #04 00:09:21.039 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:21.039 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:21.039 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:21.039 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:21.039 LBA Format #04: Data Size: 
4096 Metadata Size: 0 00:09:21.039 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:21.039 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:21.039 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:21.039 00:09:21.039 NVM Specific Namespace Data 00:09:21.039 =========================== 00:09:21.039 Logical Block Storage Tag Mask: 0 00:09:21.039 Protection Information Capabilities: 00:09:21.039 16b Guard Protection Information Storage Tag Support: No 00:09:21.039 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:21.039 Storage Tag Check Read Support: No 00:09:21.039 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:21.039 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:21.039 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:21.039 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:21.039 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:21.039 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:21.039 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:21.039 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:21.039 Namespace ID:2 00:09:21.039 Error Recovery Timeout: Unlimited 00:09:21.039 Command Set Identifier: NVM (00h) 00:09:21.039 Deallocate: Supported 00:09:21.039 Deallocated/Unwritten Error: Supported 00:09:21.039 Deallocated Read Value: All 0x00 00:09:21.039 Deallocate in Write Zeroes: Not Supported 00:09:21.039 Deallocated Guard Field: 0xFFFF 00:09:21.039 Flush: Supported 00:09:21.039 Reservation: Not Supported 00:09:21.039 Namespace Sharing Capabilities: Private 00:09:21.039 Size (in LBAs): 1048576 (4GiB) 00:09:21.039 Capacity (in LBAs): 1048576 (4GiB) 00:09:21.039 Utilization (in LBAs): 1048576 (4GiB) 00:09:21.039 Thin Provisioning: Not Supported 00:09:21.039 Per-NS Atomic Units: No 00:09:21.039 Maximum Single Source Range Length: 128 00:09:21.039 Maximum Copy Length: 128 00:09:21.039 Maximum Source Range Count: 128 00:09:21.039 NGUID/EUI64 Never Reused: No 00:09:21.039 Namespace Write Protected: No 00:09:21.039 Number of LBA Formats: 8 00:09:21.039 Current LBA Format: LBA Format #04 00:09:21.039 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:21.039 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:21.039 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:21.039 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:21.039 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:21.039 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:21.039 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:21.040 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:21.040 00:09:21.040 NVM Specific Namespace Data 00:09:21.040 =========================== 00:09:21.040 Logical Block Storage Tag Mask: 0 00:09:21.040 Protection Information Capabilities: 00:09:21.040 16b Guard Protection Information Storage Tag Support: No 00:09:21.040 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:21.040 Storage Tag Check Read Support: No 00:09:21.040 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:21.040 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard 
PI 00:09:21.040 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:21.040 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:21.040 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:21.040 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:21.040 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:21.040 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:21.040 Namespace ID:3 00:09:21.040 Error Recovery Timeout: Unlimited 00:09:21.040 Command Set Identifier: NVM (00h) 00:09:21.040 Deallocate: Supported 00:09:21.040 Deallocated/Unwritten Error: Supported 00:09:21.040 Deallocated Read Value: All 0x00 00:09:21.040 Deallocate in Write Zeroes: Not Supported 00:09:21.040 Deallocated Guard Field: 0xFFFF 00:09:21.040 Flush: Supported 00:09:21.040 Reservation: Not Supported 00:09:21.040 Namespace Sharing Capabilities: Private 00:09:21.040 Size (in LBAs): 1048576 (4GiB) 00:09:21.300 Capacity (in LBAs): 1048576 (4GiB) 00:09:21.300 Utilization (in LBAs): 1048576 (4GiB) 00:09:21.300 Thin Provisioning: Not Supported 00:09:21.300 Per-NS Atomic Units: No 00:09:21.300 Maximum Single Source Range Length: 128 00:09:21.300 Maximum Copy Length: 128 00:09:21.300 Maximum Source Range Count: 128 00:09:21.300 NGUID/EUI64 Never Reused: No 00:09:21.300 Namespace Write Protected: No 00:09:21.300 Number of LBA Formats: 8 00:09:21.300 Current LBA Format: LBA Format #04 00:09:21.300 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:21.300 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:21.300 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:21.300 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:21.300 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:21.300 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:21.300 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:21.300 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:21.300 00:09:21.300 NVM Specific Namespace Data 00:09:21.300 =========================== 00:09:21.300 Logical Block Storage Tag Mask: 0 00:09:21.300 Protection Information Capabilities: 00:09:21.300 16b Guard Protection Information Storage Tag Support: No 00:09:21.300 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:21.300 Storage Tag Check Read Support: No 00:09:21.300 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:21.300 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:21.300 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:21.300 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:21.300 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:21.300 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:21.300 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:21.300 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:21.300 18:16:07 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:21.300 18:16:07 nvme.nvme_identify -- nvme/nvme.sh@16 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:09:21.300 ===================================================== 00:09:21.300 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:21.300 ===================================================== 00:09:21.300 Controller Capabilities/Features 00:09:21.300 ================================ 00:09:21.300 Vendor ID: 1b36 00:09:21.300 Subsystem Vendor ID: 1af4 00:09:21.300 Serial Number: 12340 00:09:21.300 Model Number: QEMU NVMe Ctrl 00:09:21.300 Firmware Version: 8.0.0 00:09:21.300 Recommended Arb Burst: 6 00:09:21.300 IEEE OUI Identifier: 00 54 52 00:09:21.300 Multi-path I/O 00:09:21.300 May have multiple subsystem ports: No 00:09:21.300 May have multiple controllers: No 00:09:21.300 Associated with SR-IOV VF: No 00:09:21.300 Max Data Transfer Size: 524288 00:09:21.300 Max Number of Namespaces: 256 00:09:21.300 Max Number of I/O Queues: 64 00:09:21.300 NVMe Specification Version (VS): 1.4 00:09:21.300 NVMe Specification Version (Identify): 1.4 00:09:21.300 Maximum Queue Entries: 2048 00:09:21.300 Contiguous Queues Required: Yes 00:09:21.300 Arbitration Mechanisms Supported 00:09:21.300 Weighted Round Robin: Not Supported 00:09:21.300 Vendor Specific: Not Supported 00:09:21.300 Reset Timeout: 7500 ms 00:09:21.300 Doorbell Stride: 4 bytes 00:09:21.300 NVM Subsystem Reset: Not Supported 00:09:21.300 Command Sets Supported 00:09:21.300 NVM Command Set: Supported 00:09:21.300 Boot Partition: Not Supported 00:09:21.300 Memory Page Size Minimum: 4096 bytes 00:09:21.300 Memory Page Size Maximum: 65536 bytes 00:09:21.300 Persistent Memory Region: Not Supported 00:09:21.300 Optional Asynchronous Events Supported 00:09:21.300 Namespace Attribute Notices: Supported 00:09:21.300 Firmware Activation Notices: Not Supported 00:09:21.300 ANA Change Notices: Not Supported 00:09:21.300 PLE Aggregate Log Change Notices: Not Supported 00:09:21.300 LBA Status Info Alert Notices: Not Supported 00:09:21.300 EGE Aggregate Log Change Notices: Not Supported 00:09:21.300 Normal NVM Subsystem Shutdown event: Not Supported 00:09:21.300 Zone Descriptor Change Notices: Not Supported 00:09:21.300 Discovery Log Change Notices: Not Supported 00:09:21.300 Controller Attributes 00:09:21.300 128-bit Host Identifier: Not Supported 00:09:21.300 Non-Operational Permissive Mode: Not Supported 00:09:21.300 NVM Sets: Not Supported 00:09:21.300 Read Recovery Levels: Not Supported 00:09:21.300 Endurance Groups: Not Supported 00:09:21.300 Predictable Latency Mode: Not Supported 00:09:21.300 Traffic Based Keep ALive: Not Supported 00:09:21.300 Namespace Granularity: Not Supported 00:09:21.300 SQ Associations: Not Supported 00:09:21.300 UUID List: Not Supported 00:09:21.300 Multi-Domain Subsystem: Not Supported 00:09:21.300 Fixed Capacity Management: Not Supported 00:09:21.300 Variable Capacity Management: Not Supported 00:09:21.300 Delete Endurance Group: Not Supported 00:09:21.300 Delete NVM Set: Not Supported 00:09:21.300 Extended LBA Formats Supported: Supported 00:09:21.300 Flexible Data Placement Supported: Not Supported 00:09:21.300 00:09:21.300 Controller Memory Buffer Support 00:09:21.300 ================================ 00:09:21.300 Supported: No 00:09:21.300 00:09:21.300 Persistent Memory Region Support 00:09:21.301 ================================ 00:09:21.301 Supported: No 00:09:21.301 00:09:21.301 Admin Command Set Attributes 00:09:21.301 ============================ 00:09:21.301 Security Send/Receive: Not Supported 00:09:21.301 
Format NVM: Supported 00:09:21.301 Firmware Activate/Download: Not Supported 00:09:21.301 Namespace Management: Supported 00:09:21.301 Device Self-Test: Not Supported 00:09:21.301 Directives: Supported 00:09:21.301 NVMe-MI: Not Supported 00:09:21.301 Virtualization Management: Not Supported 00:09:21.301 Doorbell Buffer Config: Supported 00:09:21.301 Get LBA Status Capability: Not Supported 00:09:21.301 Command & Feature Lockdown Capability: Not Supported 00:09:21.301 Abort Command Limit: 4 00:09:21.301 Async Event Request Limit: 4 00:09:21.301 Number of Firmware Slots: N/A 00:09:21.301 Firmware Slot 1 Read-Only: N/A 00:09:21.301 Firmware Activation Without Reset: N/A 00:09:21.301 Multiple Update Detection Support: N/A 00:09:21.301 Firmware Update Granularity: No Information Provided 00:09:21.301 Per-Namespace SMART Log: Yes 00:09:21.301 Asymmetric Namespace Access Log Page: Not Supported 00:09:21.301 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:09:21.301 Command Effects Log Page: Supported 00:09:21.301 Get Log Page Extended Data: Supported 00:09:21.301 Telemetry Log Pages: Not Supported 00:09:21.301 Persistent Event Log Pages: Not Supported 00:09:21.301 Supported Log Pages Log Page: May Support 00:09:21.301 Commands Supported & Effects Log Page: Not Supported 00:09:21.301 Feature Identifiers & Effects Log Page: May Support 00:09:21.301 NVMe-MI Commands & Effects Log Page: May Support 00:09:21.301 Data Area 4 for Telemetry Log: Not Supported 00:09:21.301 Error Log Page Entries Supported: 1 00:09:21.301 Keep Alive: Not Supported 00:09:21.301 00:09:21.301 NVM Command Set Attributes 00:09:21.301 ========================== 00:09:21.301 Submission Queue Entry Size 00:09:21.301 Max: 64 00:09:21.301 Min: 64 00:09:21.301 Completion Queue Entry Size 00:09:21.301 Max: 16 00:09:21.301 Min: 16 00:09:21.301 Number of Namespaces: 256 00:09:21.301 Compare Command: Supported 00:09:21.301 Write Uncorrectable Command: Not Supported 00:09:21.301 Dataset Management Command: Supported 00:09:21.301 Write Zeroes Command: Supported 00:09:21.301 Set Features Save Field: Supported 00:09:21.301 Reservations: Not Supported 00:09:21.301 Timestamp: Supported 00:09:21.301 Copy: Supported 00:09:21.301 Volatile Write Cache: Present 00:09:21.301 Atomic Write Unit (Normal): 1 00:09:21.301 Atomic Write Unit (PFail): 1 00:09:21.301 Atomic Compare & Write Unit: 1 00:09:21.301 Fused Compare & Write: Not Supported 00:09:21.301 Scatter-Gather List 00:09:21.301 SGL Command Set: Supported 00:09:21.301 SGL Keyed: Not Supported 00:09:21.301 SGL Bit Bucket Descriptor: Not Supported 00:09:21.301 SGL Metadata Pointer: Not Supported 00:09:21.301 Oversized SGL: Not Supported 00:09:21.301 SGL Metadata Address: Not Supported 00:09:21.301 SGL Offset: Not Supported 00:09:21.301 Transport SGL Data Block: Not Supported 00:09:21.301 Replay Protected Memory Block: Not Supported 00:09:21.301 00:09:21.301 Firmware Slot Information 00:09:21.301 ========================= 00:09:21.301 Active slot: 1 00:09:21.301 Slot 1 Firmware Revision: 1.0 00:09:21.301 00:09:21.301 00:09:21.301 Commands Supported and Effects 00:09:21.301 ============================== 00:09:21.301 Admin Commands 00:09:21.301 -------------- 00:09:21.301 Delete I/O Submission Queue (00h): Supported 00:09:21.301 Create I/O Submission Queue (01h): Supported 00:09:21.301 Get Log Page (02h): Supported 00:09:21.301 Delete I/O Completion Queue (04h): Supported 00:09:21.301 Create I/O Completion Queue (05h): Supported 00:09:21.301 Identify (06h): Supported 00:09:21.301 Abort (08h): Supported 
00:09:21.301 Set Features (09h): Supported 00:09:21.301 Get Features (0Ah): Supported 00:09:21.301 Asynchronous Event Request (0Ch): Supported 00:09:21.301 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:21.301 Directive Send (19h): Supported 00:09:21.301 Directive Receive (1Ah): Supported 00:09:21.301 Virtualization Management (1Ch): Supported 00:09:21.301 Doorbell Buffer Config (7Ch): Supported 00:09:21.301 Format NVM (80h): Supported LBA-Change 00:09:21.301 I/O Commands 00:09:21.301 ------------ 00:09:21.301 Flush (00h): Supported LBA-Change 00:09:21.301 Write (01h): Supported LBA-Change 00:09:21.301 Read (02h): Supported 00:09:21.301 Compare (05h): Supported 00:09:21.301 Write Zeroes (08h): Supported LBA-Change 00:09:21.301 Dataset Management (09h): Supported LBA-Change 00:09:21.301 Unknown (0Ch): Supported 00:09:21.301 Unknown (12h): Supported 00:09:21.301 Copy (19h): Supported LBA-Change 00:09:21.301 Unknown (1Dh): Supported LBA-Change 00:09:21.301 00:09:21.301 Error Log 00:09:21.301 ========= 00:09:21.301 00:09:21.301 Arbitration 00:09:21.301 =========== 00:09:21.301 Arbitration Burst: no limit 00:09:21.301 00:09:21.301 Power Management 00:09:21.301 ================ 00:09:21.301 Number of Power States: 1 00:09:21.301 Current Power State: Power State #0 00:09:21.301 Power State #0: 00:09:21.301 Max Power: 25.00 W 00:09:21.301 Non-Operational State: Operational 00:09:21.301 Entry Latency: 16 microseconds 00:09:21.301 Exit Latency: 4 microseconds 00:09:21.301 Relative Read Throughput: 0 00:09:21.301 Relative Read Latency: 0 00:09:21.301 Relative Write Throughput: 0 00:09:21.301 Relative Write Latency: 0 00:09:21.561 Idle Power: Not Reported 00:09:21.561 Active Power: Not Reported 00:09:21.561 Non-Operational Permissive Mode: Not Supported 00:09:21.561 00:09:21.561 Health Information 00:09:21.561 ================== 00:09:21.561 Critical Warnings: 00:09:21.561 Available Spare Space: OK 00:09:21.561 Temperature: OK 00:09:21.561 Device Reliability: OK 00:09:21.561 Read Only: No 00:09:21.561 Volatile Memory Backup: OK 00:09:21.561 Current Temperature: 323 Kelvin (50 Celsius) 00:09:21.561 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:21.561 Available Spare: 0% 00:09:21.561 Available Spare Threshold: 0% 00:09:21.561 Life Percentage Used: 0% 00:09:21.561 Data Units Read: 969 00:09:21.561 Data Units Written: 808 00:09:21.561 Host Read Commands: 46897 00:09:21.561 Host Write Commands: 45499 00:09:21.561 Controller Busy Time: 0 minutes 00:09:21.561 Power Cycles: 0 00:09:21.561 Power On Hours: 0 hours 00:09:21.561 Unsafe Shutdowns: 0 00:09:21.561 Unrecoverable Media Errors: 0 00:09:21.561 Lifetime Error Log Entries: 0 00:09:21.561 Warning Temperature Time: 0 minutes 00:09:21.561 Critical Temperature Time: 0 minutes 00:09:21.561 00:09:21.561 Number of Queues 00:09:21.561 ================ 00:09:21.561 Number of I/O Submission Queues: 64 00:09:21.561 Number of I/O Completion Queues: 64 00:09:21.561 00:09:21.561 ZNS Specific Controller Data 00:09:21.561 ============================ 00:09:21.561 Zone Append Size Limit: 0 00:09:21.561 00:09:21.561 00:09:21.561 Active Namespaces 00:09:21.561 ================= 00:09:21.561 Namespace ID:1 00:09:21.561 Error Recovery Timeout: Unlimited 00:09:21.561 Command Set Identifier: NVM (00h) 00:09:21.561 Deallocate: Supported 00:09:21.561 Deallocated/Unwritten Error: Supported 00:09:21.561 Deallocated Read Value: All 0x00 00:09:21.561 Deallocate in Write Zeroes: Not Supported 00:09:21.561 Deallocated Guard Field: 0xFFFF 00:09:21.561 Flush: 
Supported 00:09:21.561 Reservation: Not Supported 00:09:21.561 Metadata Transferred as: Separate Metadata Buffer 00:09:21.561 Namespace Sharing Capabilities: Private 00:09:21.561 Size (in LBAs): 1548666 (5GiB) 00:09:21.561 Capacity (in LBAs): 1548666 (5GiB) 00:09:21.561 Utilization (in LBAs): 1548666 (5GiB) 00:09:21.561 Thin Provisioning: Not Supported 00:09:21.561 Per-NS Atomic Units: No 00:09:21.561 Maximum Single Source Range Length: 128 00:09:21.561 Maximum Copy Length: 128 00:09:21.561 Maximum Source Range Count: 128 00:09:21.561 NGUID/EUI64 Never Reused: No 00:09:21.561 Namespace Write Protected: No 00:09:21.561 Number of LBA Formats: 8 00:09:21.561 Current LBA Format: LBA Format #07 00:09:21.561 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:21.561 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:21.561 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:21.561 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:21.561 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:21.561 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:21.561 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:21.561 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:21.561 00:09:21.561 NVM Specific Namespace Data 00:09:21.561 =========================== 00:09:21.561 Logical Block Storage Tag Mask: 0 00:09:21.561 Protection Information Capabilities: 00:09:21.561 16b Guard Protection Information Storage Tag Support: No 00:09:21.561 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:21.561 Storage Tag Check Read Support: No 00:09:21.561 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:21.561 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:21.561 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:21.561 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:21.562 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:21.562 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:21.562 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:21.562 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:21.562 18:16:07 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:21.562 18:16:07 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:09:21.562 ===================================================== 00:09:21.562 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:21.562 ===================================================== 00:09:21.562 Controller Capabilities/Features 00:09:21.562 ================================ 00:09:21.562 Vendor ID: 1b36 00:09:21.562 Subsystem Vendor ID: 1af4 00:09:21.562 Serial Number: 12341 00:09:21.562 Model Number: QEMU NVMe Ctrl 00:09:21.562 Firmware Version: 8.0.0 00:09:21.562 Recommended Arb Burst: 6 00:09:21.562 IEEE OUI Identifier: 00 54 52 00:09:21.562 Multi-path I/O 00:09:21.562 May have multiple subsystem ports: No 00:09:21.562 May have multiple controllers: No 00:09:21.562 Associated with SR-IOV VF: No 00:09:21.562 Max Data Transfer Size: 524288 00:09:21.562 Max Number of Namespaces: 256 00:09:21.562 Max Number of I/O Queues: 64 00:09:21.562 NVMe 
Specification Version (VS): 1.4 00:09:21.562 NVMe Specification Version (Identify): 1.4 00:09:21.562 Maximum Queue Entries: 2048 00:09:21.562 Contiguous Queues Required: Yes 00:09:21.562 Arbitration Mechanisms Supported 00:09:21.562 Weighted Round Robin: Not Supported 00:09:21.562 Vendor Specific: Not Supported 00:09:21.562 Reset Timeout: 7500 ms 00:09:21.562 Doorbell Stride: 4 bytes 00:09:21.562 NVM Subsystem Reset: Not Supported 00:09:21.562 Command Sets Supported 00:09:21.562 NVM Command Set: Supported 00:09:21.562 Boot Partition: Not Supported 00:09:21.562 Memory Page Size Minimum: 4096 bytes 00:09:21.562 Memory Page Size Maximum: 65536 bytes 00:09:21.562 Persistent Memory Region: Not Supported 00:09:21.562 Optional Asynchronous Events Supported 00:09:21.562 Namespace Attribute Notices: Supported 00:09:21.562 Firmware Activation Notices: Not Supported 00:09:21.562 ANA Change Notices: Not Supported 00:09:21.562 PLE Aggregate Log Change Notices: Not Supported 00:09:21.562 LBA Status Info Alert Notices: Not Supported 00:09:21.562 EGE Aggregate Log Change Notices: Not Supported 00:09:21.562 Normal NVM Subsystem Shutdown event: Not Supported 00:09:21.562 Zone Descriptor Change Notices: Not Supported 00:09:21.562 Discovery Log Change Notices: Not Supported 00:09:21.562 Controller Attributes 00:09:21.562 128-bit Host Identifier: Not Supported 00:09:21.562 Non-Operational Permissive Mode: Not Supported 00:09:21.562 NVM Sets: Not Supported 00:09:21.562 Read Recovery Levels: Not Supported 00:09:21.562 Endurance Groups: Not Supported 00:09:21.562 Predictable Latency Mode: Not Supported 00:09:21.562 Traffic Based Keep Alive: Not Supported 00:09:21.562 Namespace Granularity: Not Supported 00:09:21.562 SQ Associations: Not Supported 00:09:21.562 UUID List: Not Supported 00:09:21.562 Multi-Domain Subsystem: Not Supported 00:09:21.562 Fixed Capacity Management: Not Supported 00:09:21.562 Variable Capacity Management: Not Supported 00:09:21.562 Delete Endurance Group: Not Supported 00:09:21.562 Delete NVM Set: Not Supported 00:09:21.562 Extended LBA Formats Supported: Supported 00:09:21.562 Flexible Data Placement Supported: Not Supported 00:09:21.562 00:09:21.562 Controller Memory Buffer Support 00:09:21.562 ================================ 00:09:21.562 Supported: No 00:09:21.562 00:09:21.562 Persistent Memory Region Support 00:09:21.562 ================================ 00:09:21.562 Supported: No 00:09:21.562 00:09:21.562 Admin Command Set Attributes 00:09:21.562 ============================ 00:09:21.562 Security Send/Receive: Not Supported 00:09:21.562 Format NVM: Supported 00:09:21.562 Firmware Activate/Download: Not Supported 00:09:21.562 Namespace Management: Supported 00:09:21.562 Device Self-Test: Not Supported 00:09:21.562 Directives: Supported 00:09:21.562 NVMe-MI: Not Supported 00:09:21.562 Virtualization Management: Not Supported 00:09:21.562 Doorbell Buffer Config: Supported 00:09:21.562 Get LBA Status Capability: Not Supported 00:09:21.562 Command & Feature Lockdown Capability: Not Supported 00:09:21.562 Abort Command Limit: 4 00:09:21.562 Async Event Request Limit: 4 00:09:21.562 Number of Firmware Slots: N/A 00:09:21.562 Firmware Slot 1 Read-Only: N/A 00:09:21.562 Firmware Activation Without Reset: N/A 00:09:21.562 Multiple Update Detection Support: N/A 00:09:21.562 Firmware Update Granularity: No Information Provided 00:09:21.562 Per-Namespace SMART Log: Yes 00:09:21.562 Asymmetric Namespace Access Log Page: Not Supported 00:09:21.562 Subsystem NQN: nqn.2019-08.org.qemu:12341 
00:09:21.562 Command Effects Log Page: Supported 00:09:21.562 Get Log Page Extended Data: Supported 00:09:21.562 Telemetry Log Pages: Not Supported 00:09:21.562 Persistent Event Log Pages: Not Supported 00:09:21.562 Supported Log Pages Log Page: May Support 00:09:21.562 Commands Supported & Effects Log Page: Not Supported 00:09:21.562 Feature Identifiers & Effects Log Page: May Support 00:09:21.562 NVMe-MI Commands & Effects Log Page: May Support 00:09:21.562 Data Area 4 for Telemetry Log: Not Supported 00:09:21.562 Error Log Page Entries Supported: 1 00:09:21.562 Keep Alive: Not Supported 00:09:21.562 00:09:21.562 NVM Command Set Attributes 00:09:21.562 ========================== 00:09:21.562 Submission Queue Entry Size 00:09:21.562 Max: 64 00:09:21.562 Min: 64 00:09:21.562 Completion Queue Entry Size 00:09:21.562 Max: 16 00:09:21.562 Min: 16 00:09:21.562 Number of Namespaces: 256 00:09:21.562 Compare Command: Supported 00:09:21.562 Write Uncorrectable Command: Not Supported 00:09:21.562 Dataset Management Command: Supported 00:09:21.562 Write Zeroes Command: Supported 00:09:21.562 Set Features Save Field: Supported 00:09:21.562 Reservations: Not Supported 00:09:21.562 Timestamp: Supported 00:09:21.562 Copy: Supported 00:09:21.562 Volatile Write Cache: Present 00:09:21.562 Atomic Write Unit (Normal): 1 00:09:21.562 Atomic Write Unit (PFail): 1 00:09:21.562 Atomic Compare & Write Unit: 1 00:09:21.562 Fused Compare & Write: Not Supported 00:09:21.562 Scatter-Gather List 00:09:21.562 SGL Command Set: Supported 00:09:21.562 SGL Keyed: Not Supported 00:09:21.562 SGL Bit Bucket Descriptor: Not Supported 00:09:21.562 SGL Metadata Pointer: Not Supported 00:09:21.562 Oversized SGL: Not Supported 00:09:21.562 SGL Metadata Address: Not Supported 00:09:21.562 SGL Offset: Not Supported 00:09:21.562 Transport SGL Data Block: Not Supported 00:09:21.562 Replay Protected Memory Block: Not Supported 00:09:21.562 00:09:21.562 Firmware Slot Information 00:09:21.562 ========================= 00:09:21.562 Active slot: 1 00:09:21.562 Slot 1 Firmware Revision: 1.0 00:09:21.562 00:09:21.562 00:09:21.562 Commands Supported and Effects 00:09:21.562 ============================== 00:09:21.562 Admin Commands 00:09:21.562 -------------- 00:09:21.562 Delete I/O Submission Queue (00h): Supported 00:09:21.562 Create I/O Submission Queue (01h): Supported 00:09:21.562 Get Log Page (02h): Supported 00:09:21.562 Delete I/O Completion Queue (04h): Supported 00:09:21.562 Create I/O Completion Queue (05h): Supported 00:09:21.562 Identify (06h): Supported 00:09:21.562 Abort (08h): Supported 00:09:21.562 Set Features (09h): Supported 00:09:21.562 Get Features (0Ah): Supported 00:09:21.562 Asynchronous Event Request (0Ch): Supported 00:09:21.562 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:21.562 Directive Send (19h): Supported 00:09:21.562 Directive Receive (1Ah): Supported 00:09:21.562 Virtualization Management (1Ch): Supported 00:09:21.562 Doorbell Buffer Config (7Ch): Supported 00:09:21.562 Format NVM (80h): Supported LBA-Change 00:09:21.562 I/O Commands 00:09:21.562 ------------ 00:09:21.562 Flush (00h): Supported LBA-Change 00:09:21.562 Write (01h): Supported LBA-Change 00:09:21.562 Read (02h): Supported 00:09:21.562 Compare (05h): Supported 00:09:21.562 Write Zeroes (08h): Supported LBA-Change 00:09:21.562 Dataset Management (09h): Supported LBA-Change 00:09:21.562 Unknown (0Ch): Supported 00:09:21.562 Unknown (12h): Supported 00:09:21.562 Copy (19h): Supported LBA-Change 00:09:21.562 Unknown (1Dh): 
Supported LBA-Change 00:09:21.562 00:09:21.562 Error Log 00:09:21.562 ========= 00:09:21.562 00:09:21.562 Arbitration 00:09:21.562 =========== 00:09:21.562 Arbitration Burst: no limit 00:09:21.562 00:09:21.562 Power Management 00:09:21.562 ================ 00:09:21.562 Number of Power States: 1 00:09:21.562 Current Power State: Power State #0 00:09:21.562 Power State #0: 00:09:21.562 Max Power: 25.00 W 00:09:21.562 Non-Operational State: Operational 00:09:21.562 Entry Latency: 16 microseconds 00:09:21.562 Exit Latency: 4 microseconds 00:09:21.562 Relative Read Throughput: 0 00:09:21.562 Relative Read Latency: 0 00:09:21.562 Relative Write Throughput: 0 00:09:21.562 Relative Write Latency: 0 00:09:21.824 Idle Power: Not Reported 00:09:21.824 Active Power: Not Reported 00:09:21.824 Non-Operational Permissive Mode: Not Supported 00:09:21.824 00:09:21.824 Health Information 00:09:21.824 ================== 00:09:21.824 Critical Warnings: 00:09:21.824 Available Spare Space: OK 00:09:21.824 Temperature: OK 00:09:21.824 Device Reliability: OK 00:09:21.824 Read Only: No 00:09:21.824 Volatile Memory Backup: OK 00:09:21.824 Current Temperature: 323 Kelvin (50 Celsius) 00:09:21.824 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:21.824 Available Spare: 0% 00:09:21.824 Available Spare Threshold: 0% 00:09:21.824 Life Percentage Used: 0% 00:09:21.824 Data Units Read: 696 00:09:21.824 Data Units Written: 547 00:09:21.824 Host Read Commands: 33371 00:09:21.824 Host Write Commands: 31145 00:09:21.824 Controller Busy Time: 0 minutes 00:09:21.824 Power Cycles: 0 00:09:21.824 Power On Hours: 0 hours 00:09:21.824 Unsafe Shutdowns: 0 00:09:21.824 Unrecoverable Media Errors: 0 00:09:21.824 Lifetime Error Log Entries: 0 00:09:21.824 Warning Temperature Time: 0 minutes 00:09:21.824 Critical Temperature Time: 0 minutes 00:09:21.824 00:09:21.824 Number of Queues 00:09:21.824 ================ 00:09:21.824 Number of I/O Submission Queues: 64 00:09:21.824 Number of I/O Completion Queues: 64 00:09:21.824 00:09:21.824 ZNS Specific Controller Data 00:09:21.824 ============================ 00:09:21.824 Zone Append Size Limit: 0 00:09:21.824 00:09:21.824 00:09:21.824 Active Namespaces 00:09:21.824 ================= 00:09:21.824 Namespace ID:1 00:09:21.824 Error Recovery Timeout: Unlimited 00:09:21.824 Command Set Identifier: NVM (00h) 00:09:21.824 Deallocate: Supported 00:09:21.824 Deallocated/Unwritten Error: Supported 00:09:21.824 Deallocated Read Value: All 0x00 00:09:21.824 Deallocate in Write Zeroes: Not Supported 00:09:21.824 Deallocated Guard Field: 0xFFFF 00:09:21.824 Flush: Supported 00:09:21.824 Reservation: Not Supported 00:09:21.824 Namespace Sharing Capabilities: Private 00:09:21.824 Size (in LBAs): 1310720 (5GiB) 00:09:21.824 Capacity (in LBAs): 1310720 (5GiB) 00:09:21.824 Utilization (in LBAs): 1310720 (5GiB) 00:09:21.824 Thin Provisioning: Not Supported 00:09:21.824 Per-NS Atomic Units: No 00:09:21.824 Maximum Single Source Range Length: 128 00:09:21.824 Maximum Copy Length: 128 00:09:21.824 Maximum Source Range Count: 128 00:09:21.824 NGUID/EUI64 Never Reused: No 00:09:21.824 Namespace Write Protected: No 00:09:21.824 Number of LBA Formats: 8 00:09:21.824 Current LBA Format: LBA Format #04 00:09:21.824 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:21.824 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:21.824 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:21.824 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:21.824 LBA Format #04: Data Size: 4096 Metadata Size: 0 
00:09:21.824 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:21.824 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:21.824 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:21.824 00:09:21.824 NVM Specific Namespace Data 00:09:21.824 =========================== 00:09:21.824 Logical Block Storage Tag Mask: 0 00:09:21.824 Protection Information Capabilities: 00:09:21.824 16b Guard Protection Information Storage Tag Support: No 00:09:21.824 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:21.824 Storage Tag Check Read Support: No 00:09:21.824 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:21.824 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:21.824 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:21.824 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:21.824 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:21.824 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:21.824 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:21.824 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:21.824 18:16:07 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:21.824 18:16:07 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:09:21.824 ===================================================== 00:09:21.824 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:21.824 ===================================================== 00:09:21.824 Controller Capabilities/Features 00:09:21.824 ================================ 00:09:21.824 Vendor ID: 1b36 00:09:21.824 Subsystem Vendor ID: 1af4 00:09:21.824 Serial Number: 12342 00:09:21.824 Model Number: QEMU NVMe Ctrl 00:09:21.824 Firmware Version: 8.0.0 00:09:21.824 Recommended Arb Burst: 6 00:09:21.824 IEEE OUI Identifier: 00 54 52 00:09:21.824 Multi-path I/O 00:09:21.824 May have multiple subsystem ports: No 00:09:21.824 May have multiple controllers: No 00:09:21.824 Associated with SR-IOV VF: No 00:09:21.824 Max Data Transfer Size: 524288 00:09:21.824 Max Number of Namespaces: 256 00:09:21.824 Max Number of I/O Queues: 64 00:09:21.824 NVMe Specification Version (VS): 1.4 00:09:21.824 NVMe Specification Version (Identify): 1.4 00:09:21.824 Maximum Queue Entries: 2048 00:09:21.824 Contiguous Queues Required: Yes 00:09:21.824 Arbitration Mechanisms Supported 00:09:21.824 Weighted Round Robin: Not Supported 00:09:21.824 Vendor Specific: Not Supported 00:09:21.824 Reset Timeout: 7500 ms 00:09:21.824 Doorbell Stride: 4 bytes 00:09:21.824 NVM Subsystem Reset: Not Supported 00:09:21.824 Command Sets Supported 00:09:21.824 NVM Command Set: Supported 00:09:21.824 Boot Partition: Not Supported 00:09:21.824 Memory Page Size Minimum: 4096 bytes 00:09:21.824 Memory Page Size Maximum: 65536 bytes 00:09:21.824 Persistent Memory Region: Not Supported 00:09:21.824 Optional Asynchronous Events Supported 00:09:21.824 Namespace Attribute Notices: Supported 00:09:21.824 Firmware Activation Notices: Not Supported 00:09:21.824 ANA Change Notices: Not Supported 00:09:21.824 PLE Aggregate Log Change Notices: Not Supported 00:09:21.824 LBA Status Info Alert Notices: 
Not Supported 00:09:21.824 EGE Aggregate Log Change Notices: Not Supported 00:09:21.824 Normal NVM Subsystem Shutdown event: Not Supported 00:09:21.824 Zone Descriptor Change Notices: Not Supported 00:09:21.824 Discovery Log Change Notices: Not Supported 00:09:21.824 Controller Attributes 00:09:21.824 128-bit Host Identifier: Not Supported 00:09:21.824 Non-Operational Permissive Mode: Not Supported 00:09:21.824 NVM Sets: Not Supported 00:09:21.824 Read Recovery Levels: Not Supported 00:09:21.824 Endurance Groups: Not Supported 00:09:21.824 Predictable Latency Mode: Not Supported 00:09:21.824 Traffic Based Keep Alive: Not Supported 00:09:21.824 Namespace Granularity: Not Supported 00:09:21.824 SQ Associations: Not Supported 00:09:21.824 UUID List: Not Supported 00:09:21.824 Multi-Domain Subsystem: Not Supported 00:09:21.824 Fixed Capacity Management: Not Supported 00:09:21.824 Variable Capacity Management: Not Supported 00:09:21.824 Delete Endurance Group: Not Supported 00:09:21.824 Delete NVM Set: Not Supported 00:09:21.824 Extended LBA Formats Supported: Supported 00:09:21.824 Flexible Data Placement Supported: Not Supported 00:09:21.824 00:09:21.824 Controller Memory Buffer Support 00:09:21.824 ================================ 00:09:21.824 Supported: No 00:09:21.824 00:09:21.824 Persistent Memory Region Support 00:09:21.824 ================================ 00:09:21.824 Supported: No 00:09:21.824 00:09:21.824 Admin Command Set Attributes 00:09:21.824 ============================ 00:09:21.824 Security Send/Receive: Not Supported 00:09:21.824 Format NVM: Supported 00:09:21.824 Firmware Activate/Download: Not Supported 00:09:21.824 Namespace Management: Supported 00:09:21.824 Device Self-Test: Not Supported 00:09:21.824 Directives: Supported 00:09:21.824 NVMe-MI: Not Supported 00:09:21.824 Virtualization Management: Not Supported 00:09:21.824 Doorbell Buffer Config: Supported 00:09:21.824 Get LBA Status Capability: Not Supported 00:09:21.824 Command & Feature Lockdown Capability: Not Supported 00:09:21.824 Abort Command Limit: 4 00:09:21.824 Async Event Request Limit: 4 00:09:21.824 Number of Firmware Slots: N/A 00:09:21.824 Firmware Slot 1 Read-Only: N/A 00:09:21.824 Firmware Activation Without Reset: N/A 00:09:21.824 Multiple Update Detection Support: N/A 00:09:21.824 Firmware Update Granularity: No Information Provided 00:09:21.824 Per-Namespace SMART Log: Yes 00:09:21.824 Asymmetric Namespace Access Log Page: Not Supported 00:09:21.824 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:09:21.824 Command Effects Log Page: Supported 00:09:21.824 Get Log Page Extended Data: Supported 00:09:21.824 Telemetry Log Pages: Not Supported 00:09:21.824 Persistent Event Log Pages: Not Supported 00:09:21.824 Supported Log Pages Log Page: May Support 00:09:21.824 Commands Supported & Effects Log Page: Not Supported 00:09:21.824 Feature Identifiers & Effects Log Page: May Support 00:09:21.824 NVMe-MI Commands & Effects Log Page: May Support 00:09:21.824 Data Area 4 for Telemetry Log: Not Supported 00:09:21.824 Error Log Page Entries Supported: 1 00:09:21.824 Keep Alive: Not Supported 00:09:21.824 00:09:21.824 NVM Command Set Attributes 00:09:21.824 ========================== 00:09:21.824 Submission Queue Entry Size 00:09:21.824 Max: 64 00:09:21.824 Min: 64 00:09:21.824 Completion Queue Entry Size 00:09:21.824 Max: 16 00:09:21.824 Min: 16 00:09:21.824 Number of Namespaces: 256 00:09:21.824 Compare Command: Supported 00:09:21.824 Write Uncorrectable Command: Not Supported 00:09:21.824 Dataset Management Command: 
Supported 00:09:21.824 Write Zeroes Command: Supported 00:09:21.824 Set Features Save Field: Supported 00:09:21.824 Reservations: Not Supported 00:09:21.824 Timestamp: Supported 00:09:21.824 Copy: Supported 00:09:21.824 Volatile Write Cache: Present 00:09:21.824 Atomic Write Unit (Normal): 1 00:09:21.824 Atomic Write Unit (PFail): 1 00:09:21.824 Atomic Compare & Write Unit: 1 00:09:21.824 Fused Compare & Write: Not Supported 00:09:21.824 Scatter-Gather List 00:09:21.824 SGL Command Set: Supported 00:09:21.824 SGL Keyed: Not Supported 00:09:21.824 SGL Bit Bucket Descriptor: Not Supported 00:09:21.824 SGL Metadata Pointer: Not Supported 00:09:21.824 Oversized SGL: Not Supported 00:09:21.824 SGL Metadata Address: Not Supported 00:09:21.824 SGL Offset: Not Supported 00:09:21.824 Transport SGL Data Block: Not Supported 00:09:21.824 Replay Protected Memory Block: Not Supported 00:09:21.824 00:09:21.824 Firmware Slot Information 00:09:21.824 ========================= 00:09:21.824 Active slot: 1 00:09:21.824 Slot 1 Firmware Revision: 1.0 00:09:21.824 00:09:21.824 00:09:21.824 Commands Supported and Effects 00:09:21.824 ============================== 00:09:21.824 Admin Commands 00:09:21.824 -------------- 00:09:21.824 Delete I/O Submission Queue (00h): Supported 00:09:21.824 Create I/O Submission Queue (01h): Supported 00:09:21.824 Get Log Page (02h): Supported 00:09:21.824 Delete I/O Completion Queue (04h): Supported 00:09:21.824 Create I/O Completion Queue (05h): Supported 00:09:21.824 Identify (06h): Supported 00:09:21.824 Abort (08h): Supported 00:09:21.824 Set Features (09h): Supported 00:09:21.824 Get Features (0Ah): Supported 00:09:21.824 Asynchronous Event Request (0Ch): Supported 00:09:21.824 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:21.824 Directive Send (19h): Supported 00:09:21.824 Directive Receive (1Ah): Supported 00:09:21.824 Virtualization Management (1Ch): Supported 00:09:21.824 Doorbell Buffer Config (7Ch): Supported 00:09:21.824 Format NVM (80h): Supported LBA-Change 00:09:21.824 I/O Commands 00:09:21.824 ------------ 00:09:21.824 Flush (00h): Supported LBA-Change 00:09:21.824 Write (01h): Supported LBA-Change 00:09:21.824 Read (02h): Supported 00:09:21.824 Compare (05h): Supported 00:09:21.824 Write Zeroes (08h): Supported LBA-Change 00:09:21.824 Dataset Management (09h): Supported LBA-Change 00:09:21.824 Unknown (0Ch): Supported 00:09:21.824 Unknown (12h): Supported 00:09:21.824 Copy (19h): Supported LBA-Change 00:09:21.824 Unknown (1Dh): Supported LBA-Change 00:09:21.824 00:09:21.824 Error Log 00:09:21.824 ========= 00:09:21.824 00:09:21.824 Arbitration 00:09:21.824 =========== 00:09:21.824 Arbitration Burst: no limit 00:09:21.824 00:09:21.824 Power Management 00:09:21.824 ================ 00:09:21.824 Number of Power States: 1 00:09:21.824 Current Power State: Power State #0 00:09:21.824 Power State #0: 00:09:21.824 Max Power: 25.00 W 00:09:21.824 Non-Operational State: Operational 00:09:21.824 Entry Latency: 16 microseconds 00:09:21.824 Exit Latency: 4 microseconds 00:09:21.824 Relative Read Throughput: 0 00:09:21.824 Relative Read Latency: 0 00:09:21.824 Relative Write Throughput: 0 00:09:21.824 Relative Write Latency: 0 00:09:21.824 Idle Power: Not Reported 00:09:21.824 Active Power: Not Reported 00:09:21.824 Non-Operational Permissive Mode: Not Supported 00:09:21.824 00:09:21.824 Health Information 00:09:21.824 ================== 00:09:21.824 Critical Warnings: 00:09:21.824 Available Spare Space: OK 00:09:21.824 Temperature: OK 00:09:21.824 Device 
Reliability: OK 00:09:21.824 Read Only: No 00:09:21.824 Volatile Memory Backup: OK 00:09:21.824 Current Temperature: 323 Kelvin (50 Celsius) 00:09:21.825 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:21.825 Available Spare: 0% 00:09:21.825 Available Spare Threshold: 0% 00:09:21.825 Life Percentage Used: 0% 00:09:21.825 Data Units Read: 2152 00:09:21.825 Data Units Written: 1832 00:09:21.825 Host Read Commands: 99177 00:09:21.825 Host Write Commands: 94947 00:09:21.825 Controller Busy Time: 0 minutes 00:09:21.825 Power Cycles: 0 00:09:21.825 Power On Hours: 0 hours 00:09:21.825 Unsafe Shutdowns: 0 00:09:21.825 Unrecoverable Media Errors: 0 00:09:21.825 Lifetime Error Log Entries: 0 00:09:21.825 Warning Temperature Time: 0 minutes 00:09:21.825 Critical Temperature Time: 0 minutes 00:09:21.825 00:09:21.825 Number of Queues 00:09:21.825 ================ 00:09:21.825 Number of I/O Submission Queues: 64 00:09:21.825 Number of I/O Completion Queues: 64 00:09:21.825 00:09:21.825 ZNS Specific Controller Data 00:09:21.825 ============================ 00:09:21.825 Zone Append Size Limit: 0 00:09:21.825 00:09:21.825 00:09:21.825 Active Namespaces 00:09:21.825 ================= 00:09:21.825 Namespace ID:1 00:09:21.825 Error Recovery Timeout: Unlimited 00:09:21.825 Command Set Identifier: NVM (00h) 00:09:21.825 Deallocate: Supported 00:09:21.825 Deallocated/Unwritten Error: Supported 00:09:21.825 Deallocated Read Value: All 0x00 00:09:21.825 Deallocate in Write Zeroes: Not Supported 00:09:21.825 Deallocated Guard Field: 0xFFFF 00:09:21.825 Flush: Supported 00:09:21.825 Reservation: Not Supported 00:09:21.825 Namespace Sharing Capabilities: Private 00:09:21.825 Size (in LBAs): 1048576 (4GiB) 00:09:21.825 Capacity (in LBAs): 1048576 (4GiB) 00:09:21.825 Utilization (in LBAs): 1048576 (4GiB) 00:09:21.825 Thin Provisioning: Not Supported 00:09:21.825 Per-NS Atomic Units: No 00:09:21.825 Maximum Single Source Range Length: 128 00:09:21.825 Maximum Copy Length: 128 00:09:21.825 Maximum Source Range Count: 128 00:09:21.825 NGUID/EUI64 Never Reused: No 00:09:21.825 Namespace Write Protected: No 00:09:21.825 Number of LBA Formats: 8 00:09:21.825 Current LBA Format: LBA Format #04 00:09:21.825 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:21.825 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:21.825 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:21.825 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:21.825 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:21.825 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:21.825 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:21.825 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:21.825 00:09:21.825 NVM Specific Namespace Data 00:09:21.825 =========================== 00:09:21.825 Logical Block Storage Tag Mask: 0 00:09:21.825 Protection Information Capabilities: 00:09:21.825 16b Guard Protection Information Storage Tag Support: No 00:09:21.825 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:21.825 Storage Tag Check Read Support: No 00:09:21.825 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:21.825 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:21.825 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:21.825 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:21.825 Extended LBA Format #04: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:21.825 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:21.825 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:21.825 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:21.825 Namespace ID:2 00:09:21.825 Error Recovery Timeout: Unlimited 00:09:21.825 Command Set Identifier: NVM (00h) 00:09:21.825 Deallocate: Supported 00:09:21.825 Deallocated/Unwritten Error: Supported 00:09:21.825 Deallocated Read Value: All 0x00 00:09:21.825 Deallocate in Write Zeroes: Not Supported 00:09:21.825 Deallocated Guard Field: 0xFFFF 00:09:21.825 Flush: Supported 00:09:21.825 Reservation: Not Supported 00:09:21.825 Namespace Sharing Capabilities: Private 00:09:21.825 Size (in LBAs): 1048576 (4GiB) 00:09:21.825 Capacity (in LBAs): 1048576 (4GiB) 00:09:21.825 Utilization (in LBAs): 1048576 (4GiB) 00:09:21.825 Thin Provisioning: Not Supported 00:09:21.825 Per-NS Atomic Units: No 00:09:21.825 Maximum Single Source Range Length: 128 00:09:21.825 Maximum Copy Length: 128 00:09:21.825 Maximum Source Range Count: 128 00:09:21.825 NGUID/EUI64 Never Reused: No 00:09:21.825 Namespace Write Protected: No 00:09:21.825 Number of LBA Formats: 8 00:09:21.825 Current LBA Format: LBA Format #04 00:09:21.825 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:21.825 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:21.825 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:21.825 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:21.825 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:21.825 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:21.825 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:21.825 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:21.825 00:09:21.825 NVM Specific Namespace Data 00:09:21.825 =========================== 00:09:21.825 Logical Block Storage Tag Mask: 0 00:09:21.825 Protection Information Capabilities: 00:09:21.825 16b Guard Protection Information Storage Tag Support: No 00:09:21.825 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:21.825 Storage Tag Check Read Support: No 00:09:21.825 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:21.825 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:21.825 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:21.825 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:21.825 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:21.825 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:21.825 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:21.825 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:21.825 Namespace ID:3 00:09:21.825 Error Recovery Timeout: Unlimited 00:09:21.825 Command Set Identifier: NVM (00h) 00:09:21.825 Deallocate: Supported 00:09:21.825 Deallocated/Unwritten Error: Supported 00:09:21.825 Deallocated Read Value: All 0x00 00:09:21.825 Deallocate in Write Zeroes: Not Supported 00:09:21.825 Deallocated Guard Field: 0xFFFF 00:09:21.825 Flush: Supported 00:09:21.825 Reservation: Not Supported 00:09:21.825 
Namespace Sharing Capabilities: Private 00:09:21.825 Size (in LBAs): 1048576 (4GiB) 00:09:21.825 Capacity (in LBAs): 1048576 (4GiB) 00:09:21.825 Utilization (in LBAs): 1048576 (4GiB) 00:09:21.825 Thin Provisioning: Not Supported 00:09:21.825 Per-NS Atomic Units: No 00:09:21.825 Maximum Single Source Range Length: 128 00:09:21.825 Maximum Copy Length: 128 00:09:21.825 Maximum Source Range Count: 128 00:09:21.825 NGUID/EUI64 Never Reused: No 00:09:21.825 Namespace Write Protected: No 00:09:21.825 Number of LBA Formats: 8 00:09:21.825 Current LBA Format: LBA Format #04 00:09:21.825 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:21.825 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:21.825 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:21.825 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:21.825 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:21.825 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:21.825 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:21.825 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:21.825 00:09:21.825 NVM Specific Namespace Data 00:09:21.825 =========================== 00:09:21.825 Logical Block Storage Tag Mask: 0 00:09:21.825 Protection Information Capabilities: 00:09:21.825 16b Guard Protection Information Storage Tag Support: No 00:09:21.825 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:22.085 Storage Tag Check Read Support: No 00:09:22.085 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.085 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.085 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.085 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.085 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.085 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.085 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.085 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.085 18:16:08 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:22.085 18:16:08 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:09:22.085 ===================================================== 00:09:22.085 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:22.085 ===================================================== 00:09:22.085 Controller Capabilities/Features 00:09:22.085 ================================ 00:09:22.085 Vendor ID: 1b36 00:09:22.085 Subsystem Vendor ID: 1af4 00:09:22.085 Serial Number: 12343 00:09:22.085 Model Number: QEMU NVMe Ctrl 00:09:22.085 Firmware Version: 8.0.0 00:09:22.085 Recommended Arb Burst: 6 00:09:22.085 IEEE OUI Identifier: 00 54 52 00:09:22.085 Multi-path I/O 00:09:22.085 May have multiple subsystem ports: No 00:09:22.085 May have multiple controllers: Yes 00:09:22.085 Associated with SR-IOV VF: No 00:09:22.085 Max Data Transfer Size: 524288 00:09:22.085 Max Number of Namespaces: 256 00:09:22.085 Max Number of I/O Queues: 64 00:09:22.085 NVMe Specification Version (VS): 1.4 00:09:22.085 NVMe Specification Version (Identify): 1.4 00:09:22.085 Maximum Queue Entries: 2048 
00:09:22.085 Contiguous Queues Required: Yes 00:09:22.085 Arbitration Mechanisms Supported 00:09:22.085 Weighted Round Robin: Not Supported 00:09:22.085 Vendor Specific: Not Supported 00:09:22.085 Reset Timeout: 7500 ms 00:09:22.085 Doorbell Stride: 4 bytes 00:09:22.085 NVM Subsystem Reset: Not Supported 00:09:22.085 Command Sets Supported 00:09:22.085 NVM Command Set: Supported 00:09:22.085 Boot Partition: Not Supported 00:09:22.085 Memory Page Size Minimum: 4096 bytes 00:09:22.085 Memory Page Size Maximum: 65536 bytes 00:09:22.086 Persistent Memory Region: Not Supported 00:09:22.086 Optional Asynchronous Events Supported 00:09:22.086 Namespace Attribute Notices: Supported 00:09:22.086 Firmware Activation Notices: Not Supported 00:09:22.086 ANA Change Notices: Not Supported 00:09:22.086 PLE Aggregate Log Change Notices: Not Supported 00:09:22.086 LBA Status Info Alert Notices: Not Supported 00:09:22.086 EGE Aggregate Log Change Notices: Not Supported 00:09:22.086 Normal NVM Subsystem Shutdown event: Not Supported 00:09:22.086 Zone Descriptor Change Notices: Not Supported 00:09:22.086 Discovery Log Change Notices: Not Supported 00:09:22.086 Controller Attributes 00:09:22.086 128-bit Host Identifier: Not Supported 00:09:22.086 Non-Operational Permissive Mode: Not Supported 00:09:22.086 NVM Sets: Not Supported 00:09:22.086 Read Recovery Levels: Not Supported 00:09:22.086 Endurance Groups: Supported 00:09:22.086 Predictable Latency Mode: Not Supported 00:09:22.086 Traffic Based Keep Alive: Not Supported 00:09:22.086 Namespace Granularity: Not Supported 00:09:22.086 SQ Associations: Not Supported 00:09:22.086 UUID List: Not Supported 00:09:22.086 Multi-Domain Subsystem: Not Supported 00:09:22.086 Fixed Capacity Management: Not Supported 00:09:22.086 Variable Capacity Management: Not Supported 00:09:22.086 Delete Endurance Group: Not Supported 00:09:22.086 Delete NVM Set: Not Supported 00:09:22.086 Extended LBA Formats Supported: Supported 00:09:22.086 Flexible Data Placement Supported: Supported 00:09:22.086 00:09:22.086 Controller Memory Buffer Support 00:09:22.086 ================================ 00:09:22.086 Supported: No 00:09:22.086 00:09:22.086 Persistent Memory Region Support 00:09:22.086 ================================ 00:09:22.086 Supported: No 00:09:22.086 00:09:22.086 Admin Command Set Attributes 00:09:22.086 ============================ 00:09:22.086 Security Send/Receive: Not Supported 00:09:22.086 Format NVM: Supported 00:09:22.086 Firmware Activate/Download: Not Supported 00:09:22.086 Namespace Management: Supported 00:09:22.086 Device Self-Test: Not Supported 00:09:22.086 Directives: Supported 00:09:22.086 NVMe-MI: Not Supported 00:09:22.086 Virtualization Management: Not Supported 00:09:22.086 Doorbell Buffer Config: Supported 00:09:22.086 Get LBA Status Capability: Not Supported 00:09:22.086 Command & Feature Lockdown Capability: Not Supported 00:09:22.086 Abort Command Limit: 4 00:09:22.086 Async Event Request Limit: 4 00:09:22.086 Number of Firmware Slots: N/A 00:09:22.086 Firmware Slot 1 Read-Only: N/A 00:09:22.086 Firmware Activation Without Reset: N/A 00:09:22.086 Multiple Update Detection Support: N/A 00:09:22.086 Firmware Update Granularity: No Information Provided 00:09:22.086 Per-Namespace SMART Log: Yes 00:09:22.086 Asymmetric Namespace Access Log Page: Not Supported 00:09:22.086 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:09:22.086 Command Effects Log Page: Supported 00:09:22.086 Get Log Page Extended Data: Supported 00:09:22.086 Telemetry Log Pages: Not 
Supported 00:09:22.086 Persistent Event Log Pages: Not Supported 00:09:22.086 Supported Log Pages Log Page: May Support 00:09:22.086 Commands Supported & Effects Log Page: Not Supported 00:09:22.086 Feature Identifiers & Effects Log Page: May Support 00:09:22.086 NVMe-MI Commands & Effects Log Page: May Support 00:09:22.086 Data Area 4 for Telemetry Log: Not Supported 00:09:22.086 Error Log Page Entries Supported: 1 00:09:22.086 Keep Alive: Not Supported 00:09:22.086 00:09:22.086 NVM Command Set Attributes 00:09:22.086 ========================== 00:09:22.086 Submission Queue Entry Size 00:09:22.086 Max: 64 00:09:22.086 Min: 64 00:09:22.086 Completion Queue Entry Size 00:09:22.086 Max: 16 00:09:22.086 Min: 16 00:09:22.086 Number of Namespaces: 256 00:09:22.086 Compare Command: Supported 00:09:22.086 Write Uncorrectable Command: Not Supported 00:09:22.086 Dataset Management Command: Supported 00:09:22.086 Write Zeroes Command: Supported 00:09:22.086 Set Features Save Field: Supported 00:09:22.086 Reservations: Not Supported 00:09:22.086 Timestamp: Supported 00:09:22.086 Copy: Supported 00:09:22.086 Volatile Write Cache: Present 00:09:22.086 Atomic Write Unit (Normal): 1 00:09:22.086 Atomic Write Unit (PFail): 1 00:09:22.086 Atomic Compare & Write Unit: 1 00:09:22.086 Fused Compare & Write: Not Supported 00:09:22.086 Scatter-Gather List 00:09:22.086 SGL Command Set: Supported 00:09:22.086 SGL Keyed: Not Supported 00:09:22.086 SGL Bit Bucket Descriptor: Not Supported 00:09:22.086 SGL Metadata Pointer: Not Supported 00:09:22.086 Oversized SGL: Not Supported 00:09:22.086 SGL Metadata Address: Not Supported 00:09:22.086 SGL Offset: Not Supported 00:09:22.086 Transport SGL Data Block: Not Supported 00:09:22.086 Replay Protected Memory Block: Not Supported 00:09:22.086 00:09:22.086 Firmware Slot Information 00:09:22.086 ========================= 00:09:22.086 Active slot: 1 00:09:22.086 Slot 1 Firmware Revision: 1.0 00:09:22.086 00:09:22.086 00:09:22.086 Commands Supported and Effects 00:09:22.086 ============================== 00:09:22.086 Admin Commands 00:09:22.086 -------------- 00:09:22.086 Delete I/O Submission Queue (00h): Supported 00:09:22.086 Create I/O Submission Queue (01h): Supported 00:09:22.086 Get Log Page (02h): Supported 00:09:22.086 Delete I/O Completion Queue (04h): Supported 00:09:22.086 Create I/O Completion Queue (05h): Supported 00:09:22.086 Identify (06h): Supported 00:09:22.086 Abort (08h): Supported 00:09:22.086 Set Features (09h): Supported 00:09:22.086 Get Features (0Ah): Supported 00:09:22.086 Asynchronous Event Request (0Ch): Supported 00:09:22.086 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:22.086 Directive Send (19h): Supported 00:09:22.086 Directive Receive (1Ah): Supported 00:09:22.086 Virtualization Management (1Ch): Supported 00:09:22.086 Doorbell Buffer Config (7Ch): Supported 00:09:22.086 Format NVM (80h): Supported LBA-Change 00:09:22.086 I/O Commands 00:09:22.086 ------------ 00:09:22.086 Flush (00h): Supported LBA-Change 00:09:22.086 Write (01h): Supported LBA-Change 00:09:22.086 Read (02h): Supported 00:09:22.086 Compare (05h): Supported 00:09:22.086 Write Zeroes (08h): Supported LBA-Change 00:09:22.086 Dataset Management (09h): Supported LBA-Change 00:09:22.086 Unknown (0Ch): Supported 00:09:22.086 Unknown (12h): Supported 00:09:22.086 Copy (19h): Supported LBA-Change 00:09:22.086 Unknown (1Dh): Supported LBA-Change 00:09:22.086 00:09:22.086 Error Log 00:09:22.086 ========= 00:09:22.086 00:09:22.086 Arbitration 00:09:22.086 =========== 
00:09:22.086 Arbitration Burst: no limit 00:09:22.086 00:09:22.086 Power Management 00:09:22.086 ================ 00:09:22.086 Number of Power States: 1 00:09:22.086 Current Power State: Power State #0 00:09:22.086 Power State #0: 00:09:22.086 Max Power: 25.00 W 00:09:22.086 Non-Operational State: Operational 00:09:22.086 Entry Latency: 16 microseconds 00:09:22.086 Exit Latency: 4 microseconds 00:09:22.086 Relative Read Throughput: 0 00:09:22.086 Relative Read Latency: 0 00:09:22.086 Relative Write Throughput: 0 00:09:22.086 Relative Write Latency: 0 00:09:22.087 Idle Power: Not Reported 00:09:22.087 Active Power: Not Reported 00:09:22.087 Non-Operational Permissive Mode: Not Supported 00:09:22.087 00:09:22.087 Health Information 00:09:22.087 ================== 00:09:22.087 Critical Warnings: 00:09:22.087 Available Spare Space: OK 00:09:22.087 Temperature: OK 00:09:22.087 Device Reliability: OK 00:09:22.087 Read Only: No 00:09:22.087 Volatile Memory Backup: OK 00:09:22.087 Current Temperature: 323 Kelvin (50 Celsius) 00:09:22.087 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:22.087 Available Spare: 0% 00:09:22.087 Available Spare Threshold: 0% 00:09:22.087 Life Percentage Used: 0% 00:09:22.087 Data Units Read: 749 00:09:22.087 Data Units Written: 642 00:09:22.087 Host Read Commands: 33480 00:09:22.087 Host Write Commands: 32070 00:09:22.087 Controller Busy Time: 0 minutes 00:09:22.087 Power Cycles: 0 00:09:22.087 Power On Hours: 0 hours 00:09:22.087 Unsafe Shutdowns: 0 00:09:22.087 Unrecoverable Media Errors: 0 00:09:22.087 Lifetime Error Log Entries: 0 00:09:22.087 Warning Temperature Time: 0 minutes 00:09:22.087 Critical Temperature Time: 0 minutes 00:09:22.087 00:09:22.087 Number of Queues 00:09:22.087 ================ 00:09:22.087 Number of I/O Submission Queues: 64 00:09:22.087 Number of I/O Completion Queues: 64 00:09:22.087 00:09:22.087 ZNS Specific Controller Data 00:09:22.087 ============================ 00:09:22.087 Zone Append Size Limit: 0 00:09:22.087 00:09:22.087 00:09:22.087 Active Namespaces 00:09:22.087 ================= 00:09:22.087 Namespace ID:1 00:09:22.087 Error Recovery Timeout: Unlimited 00:09:22.087 Command Set Identifier: NVM (00h) 00:09:22.087 Deallocate: Supported 00:09:22.087 Deallocated/Unwritten Error: Supported 00:09:22.087 Deallocated Read Value: All 0x00 00:09:22.087 Deallocate in Write Zeroes: Not Supported 00:09:22.087 Deallocated Guard Field: 0xFFFF 00:09:22.087 Flush: Supported 00:09:22.087 Reservation: Not Supported 00:09:22.087 Namespace Sharing Capabilities: Multiple Controllers 00:09:22.087 Size (in LBAs): 262144 (1GiB) 00:09:22.087 Capacity (in LBAs): 262144 (1GiB) 00:09:22.087 Utilization (in LBAs): 262144 (1GiB) 00:09:22.087 Thin Provisioning: Not Supported 00:09:22.087 Per-NS Atomic Units: No 00:09:22.087 Maximum Single Source Range Length: 128 00:09:22.087 Maximum Copy Length: 128 00:09:22.087 Maximum Source Range Count: 128 00:09:22.087 NGUID/EUI64 Never Reused: No 00:09:22.087 Namespace Write Protected: No 00:09:22.087 Endurance group ID: 1 00:09:22.087 Number of LBA Formats: 8 00:09:22.087 Current LBA Format: LBA Format #04 00:09:22.087 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:22.087 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:22.087 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:22.087 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:22.087 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:22.087 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:22.087 LBA Format #06: Data Size: 4096 
Metadata Size: 16 00:09:22.087 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:22.087 00:09:22.087 Get Feature FDP: 00:09:22.087 ================ 00:09:22.087 Enabled: Yes 00:09:22.087 FDP configuration index: 0 00:09:22.087 00:09:22.087 FDP configurations log page 00:09:22.087 =========================== 00:09:22.087 Number of FDP configurations: 1 00:09:22.087 Version: 0 00:09:22.087 Size: 112 00:09:22.087 FDP Configuration Descriptor: 0 00:09:22.087 Descriptor Size: 96 00:09:22.087 Reclaim Group Identifier format: 2 00:09:22.087 FDP Volatile Write Cache: Not Present 00:09:22.087 FDP Configuration: Valid 00:09:22.087 Vendor Specific Size: 0 00:09:22.087 Number of Reclaim Groups: 2 00:09:22.087 Number of Reclaim Unit Handles: 8 00:09:22.087 Max Placement Identifiers: 128 00:09:22.087 Number of Namespaces Supported: 256 00:09:22.087 Reclaim Unit Nominal Size: 6000000 bytes 00:09:22.087 Estimated Reclaim Unit Time Limit: Not Reported 00:09:22.087 RUH Desc #000: RUH Type: Initially Isolated 00:09:22.087 RUH Desc #001: RUH Type: Initially Isolated 00:09:22.087 RUH Desc #002: RUH Type: Initially Isolated 00:09:22.087 RUH Desc #003: RUH Type: Initially Isolated 00:09:22.087 RUH Desc #004: RUH Type: Initially Isolated 00:09:22.087 RUH Desc #005: RUH Type: Initially Isolated 00:09:22.087 RUH Desc #006: RUH Type: Initially Isolated 00:09:22.087 RUH Desc #007: RUH Type: Initially Isolated 00:09:22.087 00:09:22.087 FDP reclaim unit handle usage log page 00:09:22.347 ====================================== 00:09:22.347 Number of Reclaim Unit Handles: 8 00:09:22.347 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:22.347 RUH Usage Desc #001: RUH Attributes: Unused 00:09:22.347 RUH Usage Desc #002: RUH Attributes: Unused 00:09:22.347 RUH Usage Desc #003: RUH Attributes: Unused 00:09:22.347 RUH Usage Desc #004: RUH Attributes: Unused 00:09:22.347 RUH Usage Desc #005: RUH Attributes: Unused 00:09:22.347 RUH Usage Desc #006: RUH Attributes: Unused 00:09:22.347 RUH Usage Desc #007: RUH Attributes: Unused 00:09:22.347 00:09:22.347 FDP statistics log page 00:09:22.347 ======================= 00:09:22.347 Host bytes with metadata written: 408657920 00:09:22.347 Media bytes with metadata written: 408702976 00:09:22.347 Media bytes erased: 0 00:09:22.347 00:09:22.347 FDP events log page 00:09:22.347 =================== 00:09:22.347 Number of FDP events: 0 00:09:22.347 00:09:22.347 NVM Specific Namespace Data 00:09:22.347 =========================== 00:09:22.347 Logical Block Storage Tag Mask: 0 00:09:22.347 Protection Information Capabilities: 00:09:22.347 16b Guard Protection Information Storage Tag Support: No 00:09:22.347 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:22.347 Storage Tag Check Read Support: No 00:09:22.347 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.347 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.347 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.347 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.347 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.347 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.347 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.347 Extended LBA 
Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:22.347 00:09:22.347 real 0m1.396s 00:09:22.347 user 0m0.550s 00:09:22.347 sys 0m0.642s 00:09:22.347 18:16:08 nvme.nvme_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:22.347 18:16:08 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:09:22.347 ************************************ 00:09:22.347 END TEST nvme_identify 00:09:22.347 ************************************ 00:09:22.347 18:16:08 nvme -- common/autotest_common.sh@1142 -- # return 0 00:09:22.347 18:16:08 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:09:22.347 18:16:08 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:22.347 18:16:08 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:22.347 18:16:08 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:22.347 ************************************ 00:09:22.347 START TEST nvme_perf 00:09:22.347 ************************************ 00:09:22.347 18:16:08 nvme.nvme_perf -- common/autotest_common.sh@1123 -- # nvme_perf 00:09:22.347 18:16:08 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:09:23.726 Initializing NVMe Controllers 00:09:23.726 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:23.726 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:23.726 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:23.726 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:23.726 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:09:23.726 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:09:23.726 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:09:23.726 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:09:23.726 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:09:23.726 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:09:23.726 Initialization complete. Launching workers. 
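A note on the summary that follows: the MiB/s column is just IOPS scaled by the 12288-byte I/O size passed via -o, e.g. 13909.77 * 12288 / 1048576 = 163.01 MiB/s per namespace and 83650.04 * 12288 / 1048576 = 980.27 MiB/s for the total row, and the per-device percentile blocks are the output of perf's latency tracking (the -L flags in the invocation above). As a minimal sketch, assuming only the binary paths and PCIe BDFs visible in this log, the identify-then-perf sequence that nvme.sh drove above could be reproduced by hand like so; the perf flags are copied verbatim from the logged invocation rather than being a full description of spdk_nvme_perf's options:

SPDK=/home/vagrant/spdk_repo/spdk
# Dump controller, namespace, and log-page data for each attached QEMU NVMe
# controller, mirroring the per-bdf identify loop in the test above.
for bdf in 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0; do
    "$SPDK/build/bin/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf" -i 0
done
# Sequential reads (-w read) of 12288 bytes (-o) at queue depth 128 (-q)
# for 1 second (-t); the remaining flags (-LL -i 0 -N) are kept as logged.
"$SPDK/build/bin/spdk_nvme_perf" -q 128 -w read -o 12288 -t 1 -LL -i 0 -N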
00:09:23.726 ======================================================== 00:09:23.726 Latency(us) 00:09:23.726 Device Information : IOPS MiB/s Average min max 00:09:23.726 PCIE (0000:00:13.0) NSID 1 from core 0: 13909.77 163.01 9203.09 5707.76 34981.85 00:09:23.726 PCIE (0000:00:10.0) NSID 1 from core 0: 13909.77 163.01 9190.29 5142.03 34411.00 00:09:23.726 PCIE (0000:00:11.0) NSID 1 from core 0: 13909.77 163.01 9178.11 4803.01 33596.99 00:09:23.726 PCIE (0000:00:12.0) NSID 1 from core 0: 13973.58 163.75 9123.20 3940.22 27464.21 00:09:23.726 PCIE (0000:00:12.0) NSID 2 from core 0: 13973.58 163.75 9109.81 3466.22 26627.51 00:09:23.726 PCIE (0000:00:12.0) NSID 3 from core 0: 13973.58 163.75 9096.08 3022.51 25751.58 00:09:23.726 ======================================================== 00:09:23.726 Total : 83650.04 980.27 9150.00 3022.51 34981.85 00:09:23.726 00:09:23.726 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:09:23.726 ================================================================================= 00:09:23.726 1.00000% : 8162.211us 00:09:23.726 10.00000% : 8460.102us 00:09:23.726 25.00000% : 8638.836us 00:09:23.726 50.00000% : 8936.727us 00:09:23.726 75.00000% : 9294.196us 00:09:23.726 90.00000% : 9889.978us 00:09:23.726 95.00000% : 10247.447us 00:09:23.726 98.00000% : 10545.338us 00:09:23.726 99.00000% : 11379.433us 00:09:23.726 99.50000% : 28478.371us 00:09:23.726 99.90000% : 34793.658us 00:09:23.726 99.99000% : 35031.971us 00:09:23.726 99.99900% : 35031.971us 00:09:23.726 99.99990% : 35031.971us 00:09:23.726 99.99999% : 35031.971us 00:09:23.726 00:09:23.726 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:09:23.726 ================================================================================= 00:09:23.726 1.00000% : 8102.633us 00:09:23.726 10.00000% : 8400.524us 00:09:23.726 25.00000% : 8638.836us 00:09:23.726 50.00000% : 8996.305us 00:09:23.726 75.00000% : 9353.775us 00:09:23.726 90.00000% : 9889.978us 00:09:23.726 95.00000% : 10307.025us 00:09:23.726 98.00000% : 10604.916us 00:09:23.726 99.00000% : 11319.855us 00:09:23.726 99.50000% : 27882.589us 00:09:23.726 99.90000% : 34078.720us 00:09:23.726 99.99000% : 34555.345us 00:09:23.726 99.99900% : 34555.345us 00:09:23.726 99.99990% : 34555.345us 00:09:23.726 99.99999% : 34555.345us 00:09:23.726 00:09:23.726 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:09:23.726 ================================================================================= 00:09:23.726 1.00000% : 8162.211us 00:09:23.726 10.00000% : 8460.102us 00:09:23.726 25.00000% : 8638.836us 00:09:23.726 50.00000% : 8936.727us 00:09:23.726 75.00000% : 9294.196us 00:09:23.726 90.00000% : 9889.978us 00:09:23.726 95.00000% : 10247.447us 00:09:23.726 98.00000% : 10485.760us 00:09:23.726 99.00000% : 11021.964us 00:09:23.726 99.50000% : 26929.338us 00:09:23.726 99.90000% : 33363.782us 00:09:23.726 99.99000% : 33602.095us 00:09:23.726 99.99900% : 33602.095us 00:09:23.726 99.99990% : 33602.095us 00:09:23.726 99.99999% : 33602.095us 00:09:23.726 00:09:23.726 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:09:23.726 ================================================================================= 00:09:23.726 1.00000% : 8102.633us 00:09:23.726 10.00000% : 8460.102us 00:09:23.726 25.00000% : 8638.836us 00:09:23.726 50.00000% : 8936.727us 00:09:23.726 75.00000% : 9294.196us 00:09:23.726 90.00000% : 9889.978us 00:09:23.726 95.00000% : 10247.447us 00:09:23.726 98.00000% : 10604.916us 00:09:23.726 99.00000% : 
11677.324us 00:09:23.726 99.50000% : 20852.364us 00:09:23.726 99.90000% : 27167.651us 00:09:23.726 99.99000% : 27525.120us 00:09:23.726 99.99900% : 27525.120us 00:09:23.726 99.99990% : 27525.120us 00:09:23.726 99.99999% : 27525.120us 00:09:23.726 00:09:23.726 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:09:23.726 ================================================================================= 00:09:23.726 1.00000% : 8102.633us 00:09:23.726 10.00000% : 8460.102us 00:09:23.726 25.00000% : 8638.836us 00:09:23.726 50.00000% : 8936.727us 00:09:23.726 75.00000% : 9294.196us 00:09:23.726 90.00000% : 9889.978us 00:09:23.726 95.00000% : 10247.447us 00:09:23.726 98.00000% : 10545.338us 00:09:23.726 99.00000% : 11498.589us 00:09:23.726 99.50000% : 20018.269us 00:09:23.726 99.90000% : 26333.556us 00:09:23.726 99.99000% : 26691.025us 00:09:23.726 99.99900% : 26691.025us 00:09:23.726 99.99990% : 26691.025us 00:09:23.726 99.99999% : 26691.025us 00:09:23.726 00:09:23.726 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:09:23.726 ================================================================================= 00:09:23.726 1.00000% : 8102.633us 00:09:23.726 10.00000% : 8460.102us 00:09:23.726 25.00000% : 8638.836us 00:09:23.726 50.00000% : 8936.727us 00:09:23.726 75.00000% : 9294.196us 00:09:23.726 90.00000% : 9889.978us 00:09:23.726 95.00000% : 10247.447us 00:09:23.726 98.00000% : 10545.338us 00:09:23.726 99.00000% : 11439.011us 00:09:23.726 99.50000% : 19184.175us 00:09:23.726 99.90000% : 25380.305us 00:09:23.726 99.99000% : 25737.775us 00:09:23.726 99.99900% : 25856.931us 00:09:23.726 99.99990% : 25856.931us 00:09:23.726 99.99999% : 25856.931us 00:09:23.726 00:09:23.726 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:09:23.726 ============================================================================== 00:09:23.726 Range in us Cumulative IO count 00:09:23.726 5689.716 - 5719.505: 0.0143% ( 2) 00:09:23.726 5719.505 - 5749.295: 0.0358% ( 3) 00:09:23.726 5749.295 - 5779.084: 0.0502% ( 2) 00:09:23.726 5779.084 - 5808.873: 0.0717% ( 3) 00:09:23.726 5808.873 - 5838.662: 0.0932% ( 3) 00:09:23.726 5838.662 - 5868.451: 0.1147% ( 3) 00:09:23.726 5868.451 - 5898.240: 0.1362% ( 3) 00:09:23.726 5898.240 - 5928.029: 0.1577% ( 3) 00:09:23.726 5928.029 - 5957.818: 0.1792% ( 3) 00:09:23.726 5957.818 - 5987.607: 0.2007% ( 3) 00:09:23.726 5987.607 - 6017.396: 0.2150% ( 2) 00:09:23.726 6017.396 - 6047.185: 0.2365% ( 3) 00:09:23.726 6047.185 - 6076.975: 0.2580% ( 3) 00:09:23.726 6076.975 - 6106.764: 0.2795% ( 3) 00:09:23.726 6106.764 - 6136.553: 0.3010% ( 3) 00:09:23.726 6136.553 - 6166.342: 0.3154% ( 2) 00:09:23.726 6166.342 - 6196.131: 0.3369% ( 3) 00:09:23.726 6196.131 - 6225.920: 0.3584% ( 3) 00:09:23.726 6225.920 - 6255.709: 0.3799% ( 3) 00:09:23.726 6255.709 - 6285.498: 0.4014% ( 3) 00:09:23.726 6285.498 - 6315.287: 0.4229% ( 3) 00:09:23.726 6315.287 - 6345.076: 0.4444% ( 3) 00:09:23.726 6345.076 - 6374.865: 0.4515% ( 1) 00:09:23.726 6374.865 - 6404.655: 0.4587% ( 1) 00:09:23.726 7923.898 - 7983.476: 0.4874% ( 4) 00:09:23.726 7983.476 - 8043.055: 0.5161% ( 4) 00:09:23.726 8043.055 - 8102.633: 0.6737% ( 22) 00:09:23.727 8102.633 - 8162.211: 1.0608% ( 54) 00:09:23.727 8162.211 - 8221.789: 1.9137% ( 119) 00:09:23.727 8221.789 - 8281.367: 3.4977% ( 221) 00:09:23.727 8281.367 - 8340.945: 5.6336% ( 298) 00:09:23.727 8340.945 - 8400.524: 8.6941% ( 427) 00:09:23.727 8400.524 - 8460.102: 12.5358% ( 536) 00:09:23.727 8460.102 - 8519.680: 16.6858% ( 579) 00:09:23.727 
8519.680 - 8579.258: 21.1869% ( 628) 00:09:23.727 8579.258 - 8638.836: 25.9246% ( 661) 00:09:23.727 8638.836 - 8698.415: 30.8343% ( 685) 00:09:23.727 8698.415 - 8757.993: 35.6795% ( 676) 00:09:23.727 8757.993 - 8817.571: 40.5748% ( 683) 00:09:23.727 8817.571 - 8877.149: 45.5419% ( 693) 00:09:23.727 8877.149 - 8936.727: 50.4659% ( 687) 00:09:23.727 8936.727 - 8996.305: 55.4401% ( 694) 00:09:23.727 8996.305 - 9055.884: 60.3426% ( 684) 00:09:23.727 9055.884 - 9115.462: 64.8796% ( 633) 00:09:23.727 9115.462 - 9175.040: 69.0797% ( 586) 00:09:23.727 9175.040 - 9234.618: 72.6562% ( 499) 00:09:23.727 9234.618 - 9294.196: 75.8458% ( 445) 00:09:23.727 9294.196 - 9353.775: 78.5694% ( 380) 00:09:23.727 9353.775 - 9413.353: 80.7124% ( 299) 00:09:23.727 9413.353 - 9472.931: 82.7552% ( 285) 00:09:23.727 9472.931 - 9532.509: 84.4252% ( 233) 00:09:23.727 9532.509 - 9592.087: 85.8157% ( 194) 00:09:23.727 9592.087 - 9651.665: 86.9481% ( 158) 00:09:23.727 9651.665 - 9711.244: 88.0161% ( 149) 00:09:23.727 9711.244 - 9770.822: 88.9550% ( 131) 00:09:23.727 9770.822 - 9830.400: 89.8581% ( 126) 00:09:23.727 9830.400 - 9889.978: 90.8042% ( 132) 00:09:23.727 9889.978 - 9949.556: 91.6643% ( 120) 00:09:23.727 9949.556 - 10009.135: 92.5602% ( 125) 00:09:23.727 10009.135 - 10068.713: 93.3200% ( 106) 00:09:23.727 10068.713 - 10128.291: 94.1012% ( 109) 00:09:23.727 10128.291 - 10187.869: 94.8036% ( 98) 00:09:23.727 10187.869 - 10247.447: 95.5347% ( 102) 00:09:23.727 10247.447 - 10307.025: 96.2514% ( 100) 00:09:23.727 10307.025 - 10366.604: 96.9108% ( 92) 00:09:23.727 10366.604 - 10426.182: 97.4842% ( 80) 00:09:23.727 10426.182 - 10485.760: 97.9143% ( 60) 00:09:23.727 10485.760 - 10545.338: 98.2368% ( 45) 00:09:23.727 10545.338 - 10604.916: 98.4733% ( 33) 00:09:23.727 10604.916 - 10664.495: 98.6024% ( 18) 00:09:23.727 10664.495 - 10724.073: 98.6955% ( 13) 00:09:23.727 10724.073 - 10783.651: 98.7744% ( 11) 00:09:23.727 10783.651 - 10843.229: 98.8245% ( 7) 00:09:23.727 10843.229 - 10902.807: 98.8532% ( 4) 00:09:23.727 10902.807 - 10962.385: 98.8747% ( 3) 00:09:23.727 10962.385 - 11021.964: 98.8890% ( 2) 00:09:23.727 11021.964 - 11081.542: 98.9106% ( 3) 00:09:23.727 11081.542 - 11141.120: 98.9321% ( 3) 00:09:23.727 11141.120 - 11200.698: 98.9536% ( 3) 00:09:23.727 11200.698 - 11260.276: 98.9679% ( 2) 00:09:23.727 11260.276 - 11319.855: 98.9894% ( 3) 00:09:23.727 11319.855 - 11379.433: 99.0109% ( 3) 00:09:23.727 11379.433 - 11439.011: 99.0252% ( 2) 00:09:23.727 11439.011 - 11498.589: 99.0539% ( 4) 00:09:23.727 11498.589 - 11558.167: 99.0682% ( 2) 00:09:23.727 11558.167 - 11617.745: 99.0826% ( 2) 00:09:23.727 26929.338 - 27048.495: 99.0969% ( 2) 00:09:23.727 27048.495 - 27167.651: 99.1256% ( 4) 00:09:23.727 27167.651 - 27286.807: 99.1614% ( 5) 00:09:23.727 27286.807 - 27405.964: 99.1972% ( 5) 00:09:23.727 27405.964 - 27525.120: 99.2331% ( 5) 00:09:23.727 27525.120 - 27644.276: 99.2689% ( 5) 00:09:23.727 27644.276 - 27763.433: 99.3048% ( 5) 00:09:23.727 27763.433 - 27882.589: 99.3406% ( 5) 00:09:23.727 27882.589 - 28001.745: 99.3764% ( 5) 00:09:23.727 28001.745 - 28120.902: 99.4194% ( 6) 00:09:23.727 28120.902 - 28240.058: 99.4481% ( 4) 00:09:23.727 28240.058 - 28359.215: 99.4839% ( 5) 00:09:23.727 28359.215 - 28478.371: 99.5198% ( 5) 00:09:23.727 28478.371 - 28597.527: 99.5413% ( 3) 00:09:23.727 33363.782 - 33602.095: 99.5986% ( 8) 00:09:23.727 33602.095 - 33840.407: 99.6703% ( 10) 00:09:23.727 33840.407 - 34078.720: 99.7420% ( 10) 00:09:23.727 34078.720 - 34317.033: 99.8136% ( 10) 00:09:23.727 34317.033 - 34555.345: 99.8925% ( 
11) 00:09:23.727 34555.345 - 34793.658: 99.9642% ( 10) 00:09:23.727 34793.658 - 35031.971: 100.0000% ( 5) 00:09:23.727 00:09:23.727 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:09:23.727 ============================================================================== 00:09:23.727 Range in us Cumulative IO count 00:09:23.727 5123.724 - 5153.513: 0.0072% ( 1) 00:09:23.727 5153.513 - 5183.302: 0.0287% ( 3) 00:09:23.727 5183.302 - 5213.091: 0.0502% ( 3) 00:09:23.727 5213.091 - 5242.880: 0.0645% ( 2) 00:09:23.727 5242.880 - 5272.669: 0.0860% ( 3) 00:09:23.727 5272.669 - 5302.458: 0.0932% ( 1) 00:09:23.727 5302.458 - 5332.247: 0.1003% ( 1) 00:09:23.727 5332.247 - 5362.036: 0.1147% ( 2) 00:09:23.727 5362.036 - 5391.825: 0.1290% ( 2) 00:09:23.727 5391.825 - 5421.615: 0.1505% ( 3) 00:09:23.727 5421.615 - 5451.404: 0.1649% ( 2) 00:09:23.727 5451.404 - 5481.193: 0.1792% ( 2) 00:09:23.727 5481.193 - 5510.982: 0.2007% ( 3) 00:09:23.727 5510.982 - 5540.771: 0.2150% ( 2) 00:09:23.727 5540.771 - 5570.560: 0.2365% ( 3) 00:09:23.727 5570.560 - 5600.349: 0.2509% ( 2) 00:09:23.727 5600.349 - 5630.138: 0.2652% ( 2) 00:09:23.727 5630.138 - 5659.927: 0.2795% ( 2) 00:09:23.727 5659.927 - 5689.716: 0.2939% ( 2) 00:09:23.727 5689.716 - 5719.505: 0.3082% ( 2) 00:09:23.727 5719.505 - 5749.295: 0.3297% ( 3) 00:09:23.727 5749.295 - 5779.084: 0.3440% ( 2) 00:09:23.727 5779.084 - 5808.873: 0.3655% ( 3) 00:09:23.727 5808.873 - 5838.662: 0.3799% ( 2) 00:09:23.727 5838.662 - 5868.451: 0.4014% ( 3) 00:09:23.727 5868.451 - 5898.240: 0.4085% ( 1) 00:09:23.727 5898.240 - 5928.029: 0.4372% ( 4) 00:09:23.727 5928.029 - 5957.818: 0.4444% ( 1) 00:09:23.727 5957.818 - 5987.607: 0.4587% ( 2) 00:09:23.727 7864.320 - 7923.898: 0.4731% ( 2) 00:09:23.727 7923.898 - 7983.476: 0.5734% ( 14) 00:09:23.727 7983.476 - 8043.055: 0.8314% ( 36) 00:09:23.727 8043.055 - 8102.633: 1.4478% ( 86) 00:09:23.727 8102.633 - 8162.211: 2.5659% ( 156) 00:09:23.727 8162.211 - 8221.789: 4.3005% ( 242) 00:09:23.727 8221.789 - 8281.367: 6.6227% ( 324) 00:09:23.727 8281.367 - 8340.945: 9.5614% ( 410) 00:09:23.727 8340.945 - 8400.524: 13.0232% ( 483) 00:09:23.727 8400.524 - 8460.102: 16.8148% ( 529) 00:09:23.727 8460.102 - 8519.680: 20.6422% ( 534) 00:09:23.727 8519.680 - 8579.258: 24.7061% ( 567) 00:09:23.727 8579.258 - 8638.836: 28.7987% ( 571) 00:09:23.727 8638.836 - 8698.415: 32.9343% ( 577) 00:09:23.727 8698.415 - 8757.993: 37.0054% ( 568) 00:09:23.727 8757.993 - 8817.571: 41.0837% ( 569) 00:09:23.727 8817.571 - 8877.149: 45.2838% ( 586) 00:09:23.727 8877.149 - 8936.727: 49.5771% ( 599) 00:09:23.727 8936.727 - 8996.305: 53.7486% ( 582) 00:09:23.727 8996.305 - 9055.884: 57.7982% ( 565) 00:09:23.727 9055.884 - 9115.462: 62.0054% ( 587) 00:09:23.727 9115.462 - 9175.040: 66.1267% ( 575) 00:09:23.727 9175.040 - 9234.618: 70.0258% ( 544) 00:09:23.727 9234.618 - 9294.196: 73.5665% ( 494) 00:09:23.727 9294.196 - 9353.775: 76.7489% ( 444) 00:09:23.727 9353.775 - 9413.353: 79.3435% ( 362) 00:09:23.727 9413.353 - 9472.931: 81.7732% ( 339) 00:09:23.727 9472.931 - 9532.509: 83.7156% ( 271) 00:09:23.727 9532.509 - 9592.087: 85.4143% ( 237) 00:09:23.727 9592.087 - 9651.665: 86.8549% ( 201) 00:09:23.727 9651.665 - 9711.244: 88.1092% ( 175) 00:09:23.727 9711.244 - 9770.822: 88.9765% ( 121) 00:09:23.727 9770.822 - 9830.400: 89.8581% ( 123) 00:09:23.727 9830.400 - 9889.978: 90.6465% ( 110) 00:09:23.727 9889.978 - 9949.556: 91.4421% ( 111) 00:09:23.727 9949.556 - 10009.135: 92.1875% ( 104) 00:09:23.727 10009.135 - 10068.713: 92.9688% ( 109) 00:09:23.727 
10068.713 - 10128.291: 93.6712% ( 98) 00:09:23.727 10128.291 - 10187.869: 94.3019% ( 88) 00:09:23.727 10187.869 - 10247.447: 94.9685% ( 93) 00:09:23.727 10247.447 - 10307.025: 95.5777% ( 85) 00:09:23.727 10307.025 - 10366.604: 96.1941% ( 86) 00:09:23.727 10366.604 - 10426.182: 96.7890% ( 83) 00:09:23.727 10426.182 - 10485.760: 97.3194% ( 74) 00:09:23.727 10485.760 - 10545.338: 97.7853% ( 65) 00:09:23.727 10545.338 - 10604.916: 98.1078% ( 45) 00:09:23.727 10604.916 - 10664.495: 98.4232% ( 44) 00:09:23.727 10664.495 - 10724.073: 98.5808% ( 22) 00:09:23.727 10724.073 - 10783.651: 98.7242% ( 20) 00:09:23.727 10783.651 - 10843.229: 98.8030% ( 11) 00:09:23.727 10843.229 - 10902.807: 98.8675% ( 9) 00:09:23.727 10902.807 - 10962.385: 98.8890% ( 3) 00:09:23.727 10962.385 - 11021.964: 98.9249% ( 5) 00:09:23.727 11081.542 - 11141.120: 98.9536% ( 4) 00:09:23.727 11141.120 - 11200.698: 98.9679% ( 2) 00:09:23.727 11200.698 - 11260.276: 98.9822% ( 2) 00:09:23.727 11260.276 - 11319.855: 99.0037% ( 3) 00:09:23.727 11319.855 - 11379.433: 99.0181% ( 2) 00:09:23.727 11379.433 - 11439.011: 99.0396% ( 3) 00:09:23.727 11439.011 - 11498.589: 99.0539% ( 2) 00:09:23.727 11498.589 - 11558.167: 99.0611% ( 1) 00:09:23.727 11558.167 - 11617.745: 99.0826% ( 3) 00:09:23.727 26095.244 - 26214.400: 99.0969% ( 2) 00:09:23.727 26214.400 - 26333.556: 99.1256% ( 4) 00:09:23.727 26333.556 - 26452.713: 99.1614% ( 5) 00:09:23.727 26452.713 - 26571.869: 99.1901% ( 4) 00:09:23.727 26571.869 - 26691.025: 99.2188% ( 4) 00:09:23.727 26691.025 - 26810.182: 99.2474% ( 4) 00:09:23.727 26810.182 - 26929.338: 99.2761% ( 4) 00:09:23.727 26929.338 - 27048.495: 99.3048% ( 4) 00:09:23.727 27048.495 - 27167.651: 99.3406% ( 5) 00:09:23.727 27167.651 - 27286.807: 99.3693% ( 4) 00:09:23.727 27286.807 - 27405.964: 99.3979% ( 4) 00:09:23.727 27405.964 - 27525.120: 99.4338% ( 5) 00:09:23.727 27525.120 - 27644.276: 99.4624% ( 4) 00:09:23.727 27644.276 - 27763.433: 99.4983% ( 5) 00:09:23.727 27763.433 - 27882.589: 99.5341% ( 5) 00:09:23.727 27882.589 - 28001.745: 99.5413% ( 1) 00:09:23.727 32410.531 - 32648.844: 99.5485% ( 1) 00:09:23.728 32648.844 - 32887.156: 99.5986% ( 7) 00:09:23.728 32887.156 - 33125.469: 99.6631% ( 9) 00:09:23.728 33125.469 - 33363.782: 99.7276% ( 9) 00:09:23.728 33363.782 - 33602.095: 99.7850% ( 8) 00:09:23.728 33602.095 - 33840.407: 99.8423% ( 8) 00:09:23.728 33840.407 - 34078.720: 99.9068% ( 9) 00:09:23.728 34078.720 - 34317.033: 99.9785% ( 10) 00:09:23.728 34317.033 - 34555.345: 100.0000% ( 3) 00:09:23.728 00:09:23.728 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:09:23.728 ============================================================================== 00:09:23.728 Range in us Cumulative IO count 00:09:23.728 4796.044 - 4825.833: 0.0215% ( 3) 00:09:23.728 4825.833 - 4855.622: 0.0430% ( 3) 00:09:23.728 4855.622 - 4885.411: 0.0645% ( 3) 00:09:23.728 4885.411 - 4915.200: 0.0788% ( 2) 00:09:23.728 4915.200 - 4944.989: 0.1003% ( 3) 00:09:23.728 4944.989 - 4974.778: 0.1218% ( 3) 00:09:23.728 4974.778 - 5004.567: 0.1433% ( 3) 00:09:23.728 5004.567 - 5034.356: 0.1649% ( 3) 00:09:23.728 5034.356 - 5064.145: 0.1720% ( 1) 00:09:23.728 5064.145 - 5093.935: 0.1935% ( 3) 00:09:23.728 5093.935 - 5123.724: 0.2150% ( 3) 00:09:23.728 5123.724 - 5153.513: 0.2365% ( 3) 00:09:23.728 5153.513 - 5183.302: 0.2509% ( 2) 00:09:23.728 5183.302 - 5213.091: 0.2652% ( 2) 00:09:23.728 5213.091 - 5242.880: 0.2867% ( 3) 00:09:23.728 5242.880 - 5272.669: 0.3082% ( 3) 00:09:23.728 5272.669 - 5302.458: 0.3297% ( 3) 00:09:23.728 5302.458 - 
5332.247: 0.3512% ( 3) 00:09:23.728 5332.247 - 5362.036: 0.3727% ( 3) 00:09:23.728 5362.036 - 5391.825: 0.3942% ( 3) 00:09:23.728 5391.825 - 5421.615: 0.4157% ( 3) 00:09:23.728 5421.615 - 5451.404: 0.4372% ( 3) 00:09:23.728 5451.404 - 5481.193: 0.4515% ( 2) 00:09:23.728 5481.193 - 5510.982: 0.4587% ( 1) 00:09:23.728 7804.742 - 7864.320: 0.4946% ( 5) 00:09:23.728 7864.320 - 7923.898: 0.5232% ( 4) 00:09:23.728 7923.898 - 7983.476: 0.5806% ( 8) 00:09:23.728 7983.476 - 8043.055: 0.6881% ( 15) 00:09:23.728 8043.055 - 8102.633: 0.9246% ( 33) 00:09:23.728 8102.633 - 8162.211: 1.4550% ( 74) 00:09:23.728 8162.211 - 8221.789: 2.4656% ( 141) 00:09:23.728 8221.789 - 8281.367: 3.9779% ( 211) 00:09:23.728 8281.367 - 8340.945: 6.1783% ( 307) 00:09:23.728 8340.945 - 8400.524: 9.2603% ( 430) 00:09:23.728 8400.524 - 8460.102: 12.8225% ( 497) 00:09:23.728 8460.102 - 8519.680: 17.0155% ( 585) 00:09:23.728 8519.680 - 8579.258: 21.6456% ( 646) 00:09:23.728 8579.258 - 8638.836: 26.3976% ( 663) 00:09:23.728 8638.836 - 8698.415: 31.1927% ( 669) 00:09:23.728 8698.415 - 8757.993: 36.1884% ( 697) 00:09:23.728 8757.993 - 8817.571: 41.0765% ( 682) 00:09:23.728 8817.571 - 8877.149: 46.0221% ( 690) 00:09:23.728 8877.149 - 8936.727: 50.9533% ( 688) 00:09:23.728 8936.727 - 8996.305: 55.7913% ( 675) 00:09:23.728 8996.305 - 9055.884: 60.5218% ( 660) 00:09:23.728 9055.884 - 9115.462: 65.0803% ( 636) 00:09:23.728 9115.462 - 9175.040: 69.0654% ( 556) 00:09:23.728 9175.040 - 9234.618: 72.8283% ( 525) 00:09:23.728 9234.618 - 9294.196: 76.0034% ( 443) 00:09:23.728 9294.196 - 9353.775: 78.6626% ( 371) 00:09:23.728 9353.775 - 9413.353: 80.8773% ( 309) 00:09:23.728 9413.353 - 9472.931: 82.9200% ( 285) 00:09:23.728 9472.931 - 9532.509: 84.6115% ( 236) 00:09:23.728 9532.509 - 9592.087: 85.9877% ( 192) 00:09:23.728 9592.087 - 9651.665: 87.1416% ( 161) 00:09:23.728 9651.665 - 9711.244: 88.0949% ( 133) 00:09:23.728 9711.244 - 9770.822: 89.0338% ( 131) 00:09:23.728 9770.822 - 9830.400: 89.9513% ( 128) 00:09:23.728 9830.400 - 9889.978: 90.8615% ( 127) 00:09:23.728 9889.978 - 9949.556: 91.7360% ( 122) 00:09:23.728 9949.556 - 10009.135: 92.6032% ( 121) 00:09:23.728 10009.135 - 10068.713: 93.3845% ( 109) 00:09:23.728 10068.713 - 10128.291: 94.1872% ( 112) 00:09:23.728 10128.291 - 10187.869: 94.9326% ( 104) 00:09:23.728 10187.869 - 10247.447: 95.7067% ( 108) 00:09:23.728 10247.447 - 10307.025: 96.4163% ( 99) 00:09:23.728 10307.025 - 10366.604: 97.1187% ( 98) 00:09:23.728 10366.604 - 10426.182: 97.6849% ( 79) 00:09:23.728 10426.182 - 10485.760: 98.1078% ( 59) 00:09:23.728 10485.760 - 10545.338: 98.3873% ( 39) 00:09:23.728 10545.338 - 10604.916: 98.5880% ( 28) 00:09:23.728 10604.916 - 10664.495: 98.7529% ( 23) 00:09:23.728 10664.495 - 10724.073: 98.8317% ( 11) 00:09:23.728 10724.073 - 10783.651: 98.9034% ( 10) 00:09:23.728 10783.651 - 10843.229: 98.9392% ( 5) 00:09:23.728 10843.229 - 10902.807: 98.9607% ( 3) 00:09:23.728 10902.807 - 10962.385: 98.9822% ( 3) 00:09:23.728 10962.385 - 11021.964: 99.0037% ( 3) 00:09:23.728 11021.964 - 11081.542: 99.0181% ( 2) 00:09:23.728 11081.542 - 11141.120: 99.0396% ( 3) 00:09:23.728 11141.120 - 11200.698: 99.0611% ( 3) 00:09:23.728 11200.698 - 11260.276: 99.0826% ( 3) 00:09:23.728 25380.305 - 25499.462: 99.0969% ( 2) 00:09:23.728 25499.462 - 25618.618: 99.1327% ( 5) 00:09:23.728 25618.618 - 25737.775: 99.1686% ( 5) 00:09:23.728 25737.775 - 25856.931: 99.2044% ( 5) 00:09:23.728 25856.931 - 25976.087: 99.2403% ( 5) 00:09:23.728 25976.087 - 26095.244: 99.2761% ( 5) 00:09:23.728 26095.244 - 26214.400: 99.3119% ( 5) 
00:09:23.728 26214.400 - 26333.556: 99.3478% ( 5) 00:09:23.728 26333.556 - 26452.713: 99.3836% ( 5) 00:09:23.728 26452.713 - 26571.869: 99.4194% ( 5) 00:09:23.728 26571.869 - 26691.025: 99.4481% ( 4) 00:09:23.728 26691.025 - 26810.182: 99.4839% ( 5) 00:09:23.728 26810.182 - 26929.338: 99.5198% ( 5) 00:09:23.728 26929.338 - 27048.495: 99.5413% ( 3) 00:09:23.728 31695.593 - 31933.905: 99.5485% ( 1) 00:09:23.728 31933.905 - 32172.218: 99.5986% ( 7) 00:09:23.728 32172.218 - 32410.531: 99.6560% ( 8) 00:09:23.728 32410.531 - 32648.844: 99.7276% ( 10) 00:09:23.728 32648.844 - 32887.156: 99.7921% ( 9) 00:09:23.728 32887.156 - 33125.469: 99.8638% ( 10) 00:09:23.728 33125.469 - 33363.782: 99.9283% ( 9) 00:09:23.728 33363.782 - 33602.095: 100.0000% ( 10) 00:09:23.728 00:09:23.728 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:09:23.728 ============================================================================== 00:09:23.728 Range in us Cumulative IO count 00:09:23.728 3932.160 - 3961.949: 0.0143% ( 2) 00:09:23.728 3961.949 - 3991.738: 0.0357% ( 3) 00:09:23.728 3991.738 - 4021.527: 0.0571% ( 3) 00:09:23.728 4021.527 - 4051.316: 0.0856% ( 4) 00:09:23.728 4051.316 - 4081.105: 0.1070% ( 3) 00:09:23.728 4081.105 - 4110.895: 0.1284% ( 3) 00:09:23.728 4110.895 - 4140.684: 0.1498% ( 3) 00:09:23.728 4140.684 - 4170.473: 0.1712% ( 3) 00:09:23.728 4170.473 - 4200.262: 0.1926% ( 3) 00:09:23.728 4200.262 - 4230.051: 0.2140% ( 3) 00:09:23.728 4230.051 - 4259.840: 0.2354% ( 3) 00:09:23.728 4259.840 - 4289.629: 0.2568% ( 3) 00:09:23.728 4289.629 - 4319.418: 0.2783% ( 3) 00:09:23.728 4319.418 - 4349.207: 0.2997% ( 3) 00:09:23.728 4349.207 - 4378.996: 0.3211% ( 3) 00:09:23.728 4378.996 - 4408.785: 0.3353% ( 2) 00:09:23.728 4408.785 - 4438.575: 0.3567% ( 3) 00:09:23.728 4438.575 - 4468.364: 0.3781% ( 3) 00:09:23.728 4468.364 - 4498.153: 0.3995% ( 3) 00:09:23.728 4498.153 - 4527.942: 0.4209% ( 3) 00:09:23.728 4527.942 - 4557.731: 0.4424% ( 3) 00:09:23.728 4557.731 - 4587.520: 0.4566% ( 2) 00:09:23.728 6940.858 - 6970.647: 0.4638% ( 1) 00:09:23.728 6970.647 - 7000.436: 0.4780% ( 2) 00:09:23.728 7000.436 - 7030.225: 0.4923% ( 2) 00:09:23.728 7030.225 - 7060.015: 0.5066% ( 2) 00:09:23.728 7060.015 - 7089.804: 0.5280% ( 3) 00:09:23.728 7089.804 - 7119.593: 0.5494% ( 3) 00:09:23.728 7119.593 - 7149.382: 0.5708% ( 3) 00:09:23.728 7149.382 - 7179.171: 0.5922% ( 3) 00:09:23.728 7179.171 - 7208.960: 0.6064% ( 2) 00:09:23.728 7208.960 - 7238.749: 0.6279% ( 3) 00:09:23.728 7238.749 - 7268.538: 0.6493% ( 3) 00:09:23.728 7268.538 - 7298.327: 0.6564% ( 1) 00:09:23.728 7298.327 - 7328.116: 0.6778% ( 3) 00:09:23.728 7328.116 - 7357.905: 0.6921% ( 2) 00:09:23.728 7357.905 - 7387.695: 0.7135% ( 3) 00:09:23.728 7387.695 - 7417.484: 0.7349% ( 3) 00:09:23.728 7417.484 - 7447.273: 0.7491% ( 2) 00:09:23.728 7447.273 - 7477.062: 0.7705% ( 3) 00:09:23.728 7477.062 - 7506.851: 0.7920% ( 3) 00:09:23.728 7506.851 - 7536.640: 0.8134% ( 3) 00:09:23.728 7536.640 - 7566.429: 0.8348% ( 3) 00:09:23.728 7566.429 - 7596.218: 0.8490% ( 2) 00:09:23.728 7596.218 - 7626.007: 0.8704% ( 3) 00:09:23.728 7626.007 - 7685.585: 0.9132% ( 6) 00:09:23.728 7983.476 - 8043.055: 0.9561% ( 6) 00:09:23.728 8043.055 - 8102.633: 1.1915% ( 33) 00:09:23.728 8102.633 - 8162.211: 1.6981% ( 71) 00:09:23.728 8162.211 - 8221.789: 2.7041% ( 141) 00:09:23.728 8221.789 - 8281.367: 4.2666% ( 219) 00:09:23.728 8281.367 - 8340.945: 6.6210% ( 330) 00:09:23.728 8340.945 - 8400.524: 9.5534% ( 411) 00:09:23.728 8400.524 - 8460.102: 13.1635% ( 506) 00:09:23.728 8460.102 - 
8519.680: 17.2945% ( 579) 00:09:23.728 8519.680 - 8579.258: 21.7822% ( 629) 00:09:23.728 8579.258 - 8638.836: 26.5554% ( 669) 00:09:23.728 8638.836 - 8698.415: 31.3213% ( 668) 00:09:23.728 8698.415 - 8757.993: 36.0160% ( 658) 00:09:23.728 8757.993 - 8817.571: 40.9033% ( 685) 00:09:23.728 8817.571 - 8877.149: 45.7049% ( 673) 00:09:23.728 8877.149 - 8936.727: 50.6421% ( 692) 00:09:23.728 8936.727 - 8996.305: 55.5009% ( 681) 00:09:23.728 8996.305 - 9055.884: 60.1241% ( 648) 00:09:23.728 9055.884 - 9115.462: 64.4763% ( 610) 00:09:23.728 9115.462 - 9175.040: 68.5003% ( 564) 00:09:23.728 9175.040 - 9234.618: 72.2317% ( 523) 00:09:23.728 9234.618 - 9294.196: 75.3853% ( 442) 00:09:23.728 9294.196 - 9353.775: 78.1464% ( 387) 00:09:23.728 9353.775 - 9413.353: 80.4795% ( 327) 00:09:23.728 9413.353 - 9472.931: 82.3844% ( 267) 00:09:23.728 9472.931 - 9532.509: 84.0397% ( 232) 00:09:23.728 9532.509 - 9592.087: 85.5308% ( 209) 00:09:23.728 9592.087 - 9651.665: 86.8151% ( 180) 00:09:23.728 9651.665 - 9711.244: 87.8924% ( 151) 00:09:23.728 9711.244 - 9770.822: 88.8413% ( 133) 00:09:23.728 9770.822 - 9830.400: 89.7189% ( 123) 00:09:23.728 9830.400 - 9889.978: 90.5608% ( 118) 00:09:23.728 9889.978 - 9949.556: 91.4170% ( 120) 00:09:23.729 9949.556 - 10009.135: 92.2731% ( 120) 00:09:23.729 10009.135 - 10068.713: 93.0936% ( 115) 00:09:23.729 10068.713 - 10128.291: 93.8856% ( 111) 00:09:23.729 10128.291 - 10187.869: 94.6561% ( 108) 00:09:23.729 10187.869 - 10247.447: 95.3981% ( 104) 00:09:23.729 10247.447 - 10307.025: 96.0973% ( 98) 00:09:23.729 10307.025 - 10366.604: 96.6966% ( 84) 00:09:23.729 10366.604 - 10426.182: 97.2959% ( 84) 00:09:23.729 10426.182 - 10485.760: 97.6955% ( 56) 00:09:23.729 10485.760 - 10545.338: 97.9880% ( 41) 00:09:23.729 10545.338 - 10604.916: 98.1592% ( 24) 00:09:23.729 10604.916 - 10664.495: 98.3233% ( 23) 00:09:23.729 10664.495 - 10724.073: 98.4090% ( 12) 00:09:23.729 10724.073 - 10783.651: 98.4660% ( 8) 00:09:23.729 10783.651 - 10843.229: 98.5374% ( 10) 00:09:23.729 10843.229 - 10902.807: 98.6016% ( 9) 00:09:23.729 10902.807 - 10962.385: 98.6587% ( 8) 00:09:23.729 10962.385 - 11021.964: 98.6943% ( 5) 00:09:23.729 11021.964 - 11081.542: 98.7372% ( 6) 00:09:23.729 11081.542 - 11141.120: 98.7800% ( 6) 00:09:23.729 11141.120 - 11200.698: 98.8156% ( 5) 00:09:23.729 11200.698 - 11260.276: 98.8442% ( 4) 00:09:23.729 11260.276 - 11319.855: 98.8727% ( 4) 00:09:23.729 11319.855 - 11379.433: 98.8941% ( 3) 00:09:23.729 11379.433 - 11439.011: 98.9155% ( 3) 00:09:23.729 11439.011 - 11498.589: 98.9369% ( 3) 00:09:23.729 11498.589 - 11558.167: 98.9655% ( 4) 00:09:23.729 11558.167 - 11617.745: 98.9869% ( 3) 00:09:23.729 11617.745 - 11677.324: 99.0154% ( 4) 00:09:23.729 11677.324 - 11736.902: 99.0368% ( 3) 00:09:23.729 11736.902 - 11796.480: 99.0582% ( 3) 00:09:23.729 11796.480 - 11856.058: 99.0796% ( 3) 00:09:23.729 11856.058 - 11915.636: 99.0868% ( 1) 00:09:23.729 19422.487 - 19541.644: 99.0939% ( 1) 00:09:23.729 19541.644 - 19660.800: 99.1296% ( 5) 00:09:23.729 19660.800 - 19779.956: 99.1652% ( 5) 00:09:23.729 19779.956 - 19899.113: 99.2080% ( 6) 00:09:23.729 19899.113 - 20018.269: 99.2509% ( 6) 00:09:23.729 20018.269 - 20137.425: 99.2865% ( 5) 00:09:23.729 20137.425 - 20256.582: 99.3222% ( 5) 00:09:23.729 20256.582 - 20375.738: 99.3650% ( 6) 00:09:23.729 20375.738 - 20494.895: 99.4007% ( 5) 00:09:23.729 20494.895 - 20614.051: 99.4292% ( 4) 00:09:23.729 20614.051 - 20733.207: 99.4649% ( 5) 00:09:23.729 20733.207 - 20852.364: 99.5077% ( 6) 00:09:23.729 20852.364 - 20971.520: 99.5434% ( 5) 
00:09:23.729 25856.931 - 25976.087: 99.5719% ( 4) 00:09:23.729 25976.087 - 26095.244: 99.6005% ( 4) 00:09:23.729 26095.244 - 26214.400: 99.6361% ( 5) 00:09:23.729 26214.400 - 26333.556: 99.6718% ( 5) 00:09:23.729 26333.556 - 26452.713: 99.7003% ( 4) 00:09:23.729 26452.713 - 26571.869: 99.7360% ( 5) 00:09:23.729 26571.869 - 26691.025: 99.7646% ( 4) 00:09:23.729 26691.025 - 26810.182: 99.8002% ( 5) 00:09:23.729 26810.182 - 26929.338: 99.8359% ( 5) 00:09:23.729 26929.338 - 27048.495: 99.8716% ( 5) 00:09:23.729 27048.495 - 27167.651: 99.9144% ( 6) 00:09:23.729 27167.651 - 27286.807: 99.9501% ( 5) 00:09:23.729 27286.807 - 27405.964: 99.9786% ( 4) 00:09:23.729 27405.964 - 27525.120: 100.0000% ( 3) 00:09:23.729 00:09:23.729 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:09:23.729 ============================================================================== 00:09:23.729 Range in us Cumulative IO count 00:09:23.729 3455.535 - 3470.429: 0.0071% ( 1) 00:09:23.729 3470.429 - 3485.324: 0.0143% ( 1) 00:09:23.729 3485.324 - 3500.218: 0.0285% ( 2) 00:09:23.729 3500.218 - 3515.113: 0.0357% ( 1) 00:09:23.729 3515.113 - 3530.007: 0.0428% ( 1) 00:09:23.729 3530.007 - 3544.902: 0.0571% ( 2) 00:09:23.729 3544.902 - 3559.796: 0.0642% ( 1) 00:09:23.729 3559.796 - 3574.691: 0.0856% ( 3) 00:09:23.729 3574.691 - 3589.585: 0.0928% ( 1) 00:09:23.729 3589.585 - 3604.480: 0.1070% ( 2) 00:09:23.729 3604.480 - 3619.375: 0.1142% ( 1) 00:09:23.729 3619.375 - 3634.269: 0.1284% ( 2) 00:09:23.729 3634.269 - 3649.164: 0.1356% ( 1) 00:09:23.729 3649.164 - 3664.058: 0.1427% ( 1) 00:09:23.729 3664.058 - 3678.953: 0.1570% ( 2) 00:09:23.729 3678.953 - 3693.847: 0.1641% ( 1) 00:09:23.729 3693.847 - 3708.742: 0.1784% ( 2) 00:09:23.729 3708.742 - 3723.636: 0.1855% ( 1) 00:09:23.729 3723.636 - 3738.531: 0.1926% ( 1) 00:09:23.729 3738.531 - 3753.425: 0.1998% ( 1) 00:09:23.729 3753.425 - 3768.320: 0.2069% ( 1) 00:09:23.729 3768.320 - 3783.215: 0.2212% ( 2) 00:09:23.729 3783.215 - 3798.109: 0.2354% ( 2) 00:09:23.729 3798.109 - 3813.004: 0.2426% ( 1) 00:09:23.729 3813.004 - 3842.793: 0.2640% ( 3) 00:09:23.729 3842.793 - 3872.582: 0.2854% ( 3) 00:09:23.729 3872.582 - 3902.371: 0.3068% ( 3) 00:09:23.729 3902.371 - 3932.160: 0.3282% ( 3) 00:09:23.729 3932.160 - 3961.949: 0.3496% ( 3) 00:09:23.729 3961.949 - 3991.738: 0.3639% ( 2) 00:09:23.729 3991.738 - 4021.527: 0.3853% ( 3) 00:09:23.729 4021.527 - 4051.316: 0.4067% ( 3) 00:09:23.729 4051.316 - 4081.105: 0.4281% ( 3) 00:09:23.729 4081.105 - 4110.895: 0.4495% ( 3) 00:09:23.729 4110.895 - 4140.684: 0.4566% ( 1) 00:09:23.729 6613.178 - 6642.967: 0.4709% ( 2) 00:09:23.729 6642.967 - 6672.756: 0.4923% ( 3) 00:09:23.729 6672.756 - 6702.545: 0.5066% ( 2) 00:09:23.729 6702.545 - 6732.335: 0.5280% ( 3) 00:09:23.729 6732.335 - 6762.124: 0.5351% ( 1) 00:09:23.729 6762.124 - 6791.913: 0.5565% ( 3) 00:09:23.729 6791.913 - 6821.702: 0.5708% ( 2) 00:09:23.729 6821.702 - 6851.491: 0.5922% ( 3) 00:09:23.729 6851.491 - 6881.280: 0.6136% ( 3) 00:09:23.729 6881.280 - 6911.069: 0.6350% ( 3) 00:09:23.729 6911.069 - 6940.858: 0.6564% ( 3) 00:09:23.729 6940.858 - 6970.647: 0.6707% ( 2) 00:09:23.729 6970.647 - 7000.436: 0.6849% ( 2) 00:09:23.729 7000.436 - 7030.225: 0.7063% ( 3) 00:09:23.729 7030.225 - 7060.015: 0.7277% ( 3) 00:09:23.729 7060.015 - 7089.804: 0.7491% ( 3) 00:09:23.729 7089.804 - 7119.593: 0.7634% ( 2) 00:09:23.729 7119.593 - 7149.382: 0.7848% ( 3) 00:09:23.729 7149.382 - 7179.171: 0.8062% ( 3) 00:09:23.729 7179.171 - 7208.960: 0.8276% ( 3) 00:09:23.729 7208.960 - 7238.749: 0.8490% ( 
3) 00:09:23.729 7238.749 - 7268.538: 0.8704% ( 3) 00:09:23.729 7268.538 - 7298.327: 0.8918% ( 3) 00:09:23.729 7298.327 - 7328.116: 0.9132% ( 3) 00:09:23.729 7983.476 - 8043.055: 0.9561% ( 6) 00:09:23.729 8043.055 - 8102.633: 1.1273% ( 24) 00:09:23.729 8102.633 - 8162.211: 1.6053% ( 67) 00:09:23.729 8162.211 - 8221.789: 2.5257% ( 129) 00:09:23.729 8221.789 - 8281.367: 3.9669% ( 202) 00:09:23.729 8281.367 - 8340.945: 6.2357% ( 318) 00:09:23.729 8340.945 - 8400.524: 9.1966% ( 415) 00:09:23.729 8400.524 - 8460.102: 12.7925% ( 504) 00:09:23.729 8460.102 - 8519.680: 16.9449% ( 582) 00:09:23.729 8519.680 - 8579.258: 21.4683% ( 634) 00:09:23.729 8579.258 - 8638.836: 26.0916% ( 648) 00:09:23.729 8638.836 - 8698.415: 30.8933% ( 673) 00:09:23.729 8698.415 - 8757.993: 35.8447% ( 694) 00:09:23.729 8757.993 - 8817.571: 40.7677% ( 690) 00:09:23.729 8817.571 - 8877.149: 45.6978% ( 691) 00:09:23.729 8877.149 - 8936.727: 50.6635% ( 696) 00:09:23.729 8936.727 - 8996.305: 55.6364% ( 697) 00:09:23.729 8996.305 - 9055.884: 60.3382% ( 659) 00:09:23.729 9055.884 - 9115.462: 64.8616% ( 634) 00:09:23.729 9115.462 - 9175.040: 69.0140% ( 582) 00:09:23.729 9175.040 - 9234.618: 72.7882% ( 529) 00:09:23.729 9234.618 - 9294.196: 75.9917% ( 449) 00:09:23.729 9294.196 - 9353.775: 78.6886% ( 378) 00:09:23.729 9353.775 - 9413.353: 80.8861% ( 308) 00:09:23.729 9413.353 - 9472.931: 82.7055% ( 255) 00:09:23.729 9472.931 - 9532.509: 84.3179% ( 226) 00:09:23.729 9532.509 - 9592.087: 85.7591% ( 202) 00:09:23.729 9592.087 - 9651.665: 86.9364% ( 165) 00:09:23.729 9651.665 - 9711.244: 87.8425% ( 127) 00:09:23.729 9711.244 - 9770.822: 88.8128% ( 136) 00:09:23.729 9770.822 - 9830.400: 89.7546% ( 132) 00:09:23.729 9830.400 - 9889.978: 90.6607% ( 127) 00:09:23.729 9889.978 - 9949.556: 91.4954% ( 117) 00:09:23.729 9949.556 - 10009.135: 92.3373% ( 118) 00:09:23.729 10009.135 - 10068.713: 93.1436% ( 113) 00:09:23.729 10068.713 - 10128.291: 93.8927% ( 105) 00:09:23.729 10128.291 - 10187.869: 94.6490% ( 106) 00:09:23.729 10187.869 - 10247.447: 95.3910% ( 104) 00:09:23.729 10247.447 - 10307.025: 96.0902% ( 98) 00:09:23.729 10307.025 - 10366.604: 96.7466% ( 92) 00:09:23.729 10366.604 - 10426.182: 97.2888% ( 76) 00:09:23.729 10426.182 - 10485.760: 97.7383% ( 63) 00:09:23.729 10485.760 - 10545.338: 98.0594% ( 45) 00:09:23.729 10545.338 - 10604.916: 98.3305% ( 38) 00:09:23.729 10604.916 - 10664.495: 98.4660% ( 19) 00:09:23.729 10664.495 - 10724.073: 98.5659% ( 14) 00:09:23.729 10724.073 - 10783.651: 98.6301% ( 9) 00:09:23.729 10783.651 - 10843.229: 98.6943% ( 9) 00:09:23.729 10843.229 - 10902.807: 98.7372% ( 6) 00:09:23.729 10902.807 - 10962.385: 98.7800% ( 6) 00:09:23.729 10962.385 - 11021.964: 98.8156% ( 5) 00:09:23.729 11021.964 - 11081.542: 98.8442% ( 4) 00:09:23.729 11081.542 - 11141.120: 98.8727% ( 4) 00:09:23.729 11141.120 - 11200.698: 98.8941% ( 3) 00:09:23.729 11200.698 - 11260.276: 98.9155% ( 3) 00:09:23.729 11260.276 - 11319.855: 98.9369% ( 3) 00:09:23.729 11319.855 - 11379.433: 98.9583% ( 3) 00:09:23.729 11379.433 - 11439.011: 98.9869% ( 4) 00:09:23.729 11439.011 - 11498.589: 99.0083% ( 3) 00:09:23.729 11498.589 - 11558.167: 99.0297% ( 3) 00:09:23.729 11558.167 - 11617.745: 99.0582% ( 4) 00:09:23.729 11617.745 - 11677.324: 99.0725% ( 2) 00:09:23.729 11677.324 - 11736.902: 99.0868% ( 2) 00:09:23.729 18588.393 - 18707.549: 99.1153% ( 4) 00:09:23.729 18707.549 - 18826.705: 99.1510% ( 5) 00:09:23.729 18826.705 - 18945.862: 99.1938% ( 6) 00:09:23.729 18945.862 - 19065.018: 99.2295% ( 5) 00:09:23.729 19065.018 - 19184.175: 99.2651% ( 5) 
00:09:23.729 19184.175 - 19303.331: 99.2937% ( 4) 00:09:23.729 19303.331 - 19422.487: 99.3293% ( 5) 00:09:23.729 19422.487 - 19541.644: 99.3579% ( 4) 00:09:23.729 19541.644 - 19660.800: 99.4007% ( 6) 00:09:23.729 19660.800 - 19779.956: 99.4292% ( 4) 00:09:23.729 19779.956 - 19899.113: 99.4649% ( 5) 00:09:23.729 19899.113 - 20018.269: 99.5006% ( 5) 00:09:23.729 20018.269 - 20137.425: 99.5362% ( 5) 00:09:23.730 20137.425 - 20256.582: 99.5434% ( 1) 00:09:23.730 24903.680 - 25022.836: 99.5576% ( 2) 00:09:23.730 25022.836 - 25141.993: 99.5862% ( 4) 00:09:23.730 25141.993 - 25261.149: 99.6219% ( 5) 00:09:23.730 25261.149 - 25380.305: 99.6504% ( 4) 00:09:23.730 25380.305 - 25499.462: 99.6861% ( 5) 00:09:23.730 25499.462 - 25618.618: 99.7146% ( 4) 00:09:23.730 25618.618 - 25737.775: 99.7503% ( 5) 00:09:23.730 25737.775 - 25856.931: 99.7860% ( 5) 00:09:23.730 25856.931 - 25976.087: 99.8074% ( 3) 00:09:23.730 25976.087 - 26095.244: 99.8359% ( 4) 00:09:23.730 26095.244 - 26214.400: 99.8716% ( 5) 00:09:23.730 26214.400 - 26333.556: 99.9072% ( 5) 00:09:23.730 26333.556 - 26452.713: 99.9429% ( 5) 00:09:23.730 26452.713 - 26571.869: 99.9786% ( 5) 00:09:23.730 26571.869 - 26691.025: 100.0000% ( 3) 00:09:23.730 00:09:23.730 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:09:23.730 ============================================================================== 00:09:23.730 Range in us Cumulative IO count 00:09:23.730 3008.698 - 3023.593: 0.0071% ( 1) 00:09:23.730 3038.487 - 3053.382: 0.0214% ( 2) 00:09:23.730 3053.382 - 3068.276: 0.0285% ( 1) 00:09:23.730 3068.276 - 3083.171: 0.0428% ( 2) 00:09:23.730 3083.171 - 3098.065: 0.0499% ( 1) 00:09:23.730 3098.065 - 3112.960: 0.0642% ( 2) 00:09:23.730 3112.960 - 3127.855: 0.0785% ( 2) 00:09:23.730 3127.855 - 3142.749: 0.0928% ( 2) 00:09:23.730 3142.749 - 3157.644: 0.0999% ( 1) 00:09:23.730 3157.644 - 3172.538: 0.1142% ( 2) 00:09:23.730 3172.538 - 3187.433: 0.1213% ( 1) 00:09:23.730 3187.433 - 3202.327: 0.1356% ( 2) 00:09:23.730 3202.327 - 3217.222: 0.1427% ( 1) 00:09:23.730 3217.222 - 3232.116: 0.1570% ( 2) 00:09:23.730 3232.116 - 3247.011: 0.1641% ( 1) 00:09:23.730 3247.011 - 3261.905: 0.1712% ( 1) 00:09:23.730 3261.905 - 3276.800: 0.1855% ( 2) 00:09:23.730 3276.800 - 3291.695: 0.1926% ( 1) 00:09:23.730 3291.695 - 3306.589: 0.2069% ( 2) 00:09:23.730 3306.589 - 3321.484: 0.2140% ( 1) 00:09:23.730 3321.484 - 3336.378: 0.2283% ( 2) 00:09:23.730 3336.378 - 3351.273: 0.2354% ( 1) 00:09:23.730 3351.273 - 3366.167: 0.2497% ( 2) 00:09:23.730 3366.167 - 3381.062: 0.2568% ( 1) 00:09:23.730 3381.062 - 3395.956: 0.2711% ( 2) 00:09:23.730 3395.956 - 3410.851: 0.2783% ( 1) 00:09:23.730 3410.851 - 3425.745: 0.2854% ( 1) 00:09:23.730 3425.745 - 3440.640: 0.2997% ( 2) 00:09:23.730 3440.640 - 3455.535: 0.3068% ( 1) 00:09:23.730 3455.535 - 3470.429: 0.3211% ( 2) 00:09:23.730 3470.429 - 3485.324: 0.3282% ( 1) 00:09:23.730 3485.324 - 3500.218: 0.3353% ( 1) 00:09:23.730 3500.218 - 3515.113: 0.3425% ( 1) 00:09:23.730 3530.007 - 3544.902: 0.3567% ( 2) 00:09:23.730 3544.902 - 3559.796: 0.3639% ( 1) 00:09:23.730 3559.796 - 3574.691: 0.3710% ( 1) 00:09:23.730 3574.691 - 3589.585: 0.3781% ( 1) 00:09:23.730 3589.585 - 3604.480: 0.3853% ( 1) 00:09:23.730 3604.480 - 3619.375: 0.3924% ( 1) 00:09:23.730 3619.375 - 3634.269: 0.3995% ( 1) 00:09:23.730 3634.269 - 3649.164: 0.4067% ( 1) 00:09:23.730 3649.164 - 3664.058: 0.4209% ( 2) 00:09:23.730 3664.058 - 3678.953: 0.4281% ( 1) 00:09:23.730 3678.953 - 3693.847: 0.4424% ( 2) 00:09:23.730 3693.847 - 3708.742: 0.4495% ( 1) 00:09:23.730 
3708.742 - 3723.636: 0.4566% ( 1) 00:09:23.730 6136.553 - 6166.342: 0.4638% ( 1) 00:09:23.730 6166.342 - 6196.131: 0.4780% ( 2) 00:09:23.730 6196.131 - 6225.920: 0.4923% ( 2) 00:09:23.730 6225.920 - 6255.709: 0.5137% ( 3) 00:09:23.730 6255.709 - 6285.498: 0.5422% ( 4) 00:09:23.730 6285.498 - 6315.287: 0.5636% ( 3) 00:09:23.730 6315.287 - 6345.076: 0.5850% ( 3) 00:09:23.730 6345.076 - 6374.865: 0.6064% ( 3) 00:09:23.730 6374.865 - 6404.655: 0.6279% ( 3) 00:09:23.730 6404.655 - 6434.444: 0.6493% ( 3) 00:09:23.730 6434.444 - 6464.233: 0.6635% ( 2) 00:09:23.730 6464.233 - 6494.022: 0.6849% ( 3) 00:09:23.730 6494.022 - 6523.811: 0.6992% ( 2) 00:09:23.730 6523.811 - 6553.600: 0.7135% ( 2) 00:09:23.730 6553.600 - 6583.389: 0.7349% ( 3) 00:09:23.730 6583.389 - 6613.178: 0.7563% ( 3) 00:09:23.730 6613.178 - 6642.967: 0.7777% ( 3) 00:09:23.730 6642.967 - 6672.756: 0.7991% ( 3) 00:09:23.730 6672.756 - 6702.545: 0.8062% ( 1) 00:09:23.730 6702.545 - 6732.335: 0.8276% ( 3) 00:09:23.730 6732.335 - 6762.124: 0.8490% ( 3) 00:09:23.730 6762.124 - 6791.913: 0.8633% ( 2) 00:09:23.730 6791.913 - 6821.702: 0.8847% ( 3) 00:09:23.730 6821.702 - 6851.491: 0.9061% ( 3) 00:09:23.730 6851.491 - 6881.280: 0.9132% ( 1) 00:09:23.730 7923.898 - 7983.476: 0.9346% ( 3) 00:09:23.730 7983.476 - 8043.055: 0.9703% ( 5) 00:09:23.730 8043.055 - 8102.633: 1.0773% ( 15) 00:09:23.730 8102.633 - 8162.211: 1.4483% ( 52) 00:09:23.730 8162.211 - 8221.789: 2.2974% ( 119) 00:09:23.730 8221.789 - 8281.367: 3.8242% ( 214) 00:09:23.730 8281.367 - 8340.945: 6.0288% ( 309) 00:09:23.730 8340.945 - 8400.524: 9.0183% ( 419) 00:09:23.730 8400.524 - 8460.102: 12.8139% ( 532) 00:09:23.730 8460.102 - 8519.680: 16.8878% ( 571) 00:09:23.730 8519.680 - 8579.258: 21.3185% ( 621) 00:09:23.730 8579.258 - 8638.836: 26.0060% ( 657) 00:09:23.730 8638.836 - 8698.415: 30.7648% ( 667) 00:09:23.730 8698.415 - 8757.993: 35.7591% ( 700) 00:09:23.730 8757.993 - 8817.571: 40.7320% ( 697) 00:09:23.730 8817.571 - 8877.149: 45.7192% ( 699) 00:09:23.730 8877.149 - 8936.727: 50.6778% ( 695) 00:09:23.730 8936.727 - 8996.305: 55.6650% ( 699) 00:09:23.730 8996.305 - 9055.884: 60.4880% ( 676) 00:09:23.730 9055.884 - 9115.462: 65.1184% ( 649) 00:09:23.730 9115.462 - 9175.040: 69.1852% ( 570) 00:09:23.730 9175.040 - 9234.618: 72.8881% ( 519) 00:09:23.730 9234.618 - 9294.196: 76.0274% ( 440) 00:09:23.730 9294.196 - 9353.775: 78.7671% ( 384) 00:09:23.730 9353.775 - 9413.353: 80.9075% ( 300) 00:09:23.730 9413.353 - 9472.931: 82.8196% ( 268) 00:09:23.730 9472.931 - 9532.509: 84.4392% ( 227) 00:09:23.730 9532.509 - 9592.087: 85.8876% ( 203) 00:09:23.730 9592.087 - 9651.665: 87.0291% ( 160) 00:09:23.730 9651.665 - 9711.244: 87.9923% ( 135) 00:09:23.730 9711.244 - 9770.822: 88.9198% ( 130) 00:09:23.730 9770.822 - 9830.400: 89.8616% ( 132) 00:09:23.730 9830.400 - 9889.978: 90.7392% ( 123) 00:09:23.730 9889.978 - 9949.556: 91.6453% ( 127) 00:09:23.730 9949.556 - 10009.135: 92.4729% ( 116) 00:09:23.730 10009.135 - 10068.713: 93.2577% ( 110) 00:09:23.730 10068.713 - 10128.291: 93.9854% ( 102) 00:09:23.730 10128.291 - 10187.869: 94.7560% ( 108) 00:09:23.730 10187.869 - 10247.447: 95.4980% ( 104) 00:09:23.730 10247.447 - 10307.025: 96.1473% ( 91) 00:09:23.730 10307.025 - 10366.604: 96.8322% ( 96) 00:09:23.730 10366.604 - 10426.182: 97.4030% ( 80) 00:09:23.730 10426.182 - 10485.760: 97.8953% ( 69) 00:09:23.730 10485.760 - 10545.338: 98.1735% ( 39) 00:09:23.730 10545.338 - 10604.916: 98.4161% ( 34) 00:09:23.730 10604.916 - 10664.495: 98.5445% ( 18) 00:09:23.730 10664.495 - 10724.073: 
98.6373% ( 13) 00:09:23.730 10724.073 - 10783.651: 98.7086% ( 10) 00:09:23.730 10783.651 - 10843.229: 98.7728% ( 9) 00:09:23.730 10843.229 - 10902.807: 98.8085% ( 5) 00:09:23.730 10902.807 - 10962.385: 98.8442% ( 5) 00:09:23.730 10962.385 - 11021.964: 98.8727% ( 4) 00:09:23.730 11021.964 - 11081.542: 98.8941% ( 3) 00:09:23.730 11081.542 - 11141.120: 98.9155% ( 3) 00:09:23.730 11141.120 - 11200.698: 98.9298% ( 2) 00:09:23.730 11200.698 - 11260.276: 98.9441% ( 2) 00:09:23.730 11260.276 - 11319.855: 98.9726% ( 4) 00:09:23.730 11319.855 - 11379.433: 98.9869% ( 2) 00:09:23.730 11379.433 - 11439.011: 99.0083% ( 3) 00:09:23.730 11439.011 - 11498.589: 99.0297% ( 3) 00:09:23.730 11498.589 - 11558.167: 99.0511% ( 3) 00:09:23.730 11558.167 - 11617.745: 99.0654% ( 2) 00:09:23.730 11617.745 - 11677.324: 99.0868% ( 3) 00:09:23.730 17635.142 - 17754.298: 99.1082% ( 3) 00:09:23.730 17754.298 - 17873.455: 99.1367% ( 4) 00:09:23.730 17873.455 - 17992.611: 99.1724% ( 5) 00:09:23.730 17992.611 - 18111.767: 99.2080% ( 5) 00:09:23.730 18111.767 - 18230.924: 99.2366% ( 4) 00:09:23.730 18230.924 - 18350.080: 99.2723% ( 5) 00:09:23.730 18350.080 - 18469.236: 99.3079% ( 5) 00:09:23.730 18469.236 - 18588.393: 99.3436% ( 5) 00:09:23.730 18588.393 - 18707.549: 99.3721% ( 4) 00:09:23.730 18707.549 - 18826.705: 99.4078% ( 5) 00:09:23.730 18826.705 - 18945.862: 99.4435% ( 5) 00:09:23.731 18945.862 - 19065.018: 99.4792% ( 5) 00:09:23.731 19065.018 - 19184.175: 99.5148% ( 5) 00:09:23.731 19184.175 - 19303.331: 99.5362% ( 3) 00:09:23.731 19303.331 - 19422.487: 99.5434% ( 1) 00:09:23.731 24069.585 - 24188.742: 99.5648% ( 3) 00:09:23.731 24188.742 - 24307.898: 99.5862% ( 3) 00:09:23.731 24307.898 - 24427.055: 99.6219% ( 5) 00:09:23.731 24427.055 - 24546.211: 99.6575% ( 5) 00:09:23.731 24546.211 - 24665.367: 99.6861% ( 4) 00:09:23.731 24665.367 - 24784.524: 99.7217% ( 5) 00:09:23.731 24784.524 - 24903.680: 99.7503% ( 4) 00:09:23.731 24903.680 - 25022.836: 99.7860% ( 5) 00:09:23.731 25022.836 - 25141.993: 99.8216% ( 5) 00:09:23.731 25141.993 - 25261.149: 99.8573% ( 5) 00:09:23.731 25261.149 - 25380.305: 99.9001% ( 6) 00:09:23.731 25380.305 - 25499.462: 99.9287% ( 4) 00:09:23.731 25499.462 - 25618.618: 99.9643% ( 5) 00:09:23.731 25618.618 - 25737.775: 99.9929% ( 4) 00:09:23.731 25737.775 - 25856.931: 100.0000% ( 1) 00:09:23.731 00:09:23.731 18:16:09 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:09:25.106 Initializing NVMe Controllers 00:09:25.106 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:25.106 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:25.106 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:25.106 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:25.106 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:09:25.106 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:09:25.106 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:09:25.106 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:09:25.106 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:09:25.106 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:09:25.106 Initialization complete. Launching workers. 
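The second invocation above is the write-side counterpart of the earlier read run: identical queue depth, I/O size, duration, and latency-tracking flags, with -w write substituted and -N not passed this time. A minimal sketch for comparison, under the same hedged flag glosses as before:

# same 128-deep, 12 KiB, 1-second workload, now issuing sequential writes
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0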
00:09:25.106 ======================================================== 00:09:25.106 Latency(us) 00:09:25.106 Device Information : IOPS MiB/s Average min max 00:09:25.106 PCIE (0000:00:13.0) NSID 1 from core 0: 11754.43 137.75 10895.46 7051.48 25762.66 00:09:25.106 PCIE (0000:00:10.0) NSID 1 from core 0: 11754.43 137.75 10885.67 6180.26 25363.92 00:09:25.106 PCIE (0000:00:11.0) NSID 1 from core 0: 11754.43 137.75 10875.20 5619.61 24492.88 00:09:25.106 PCIE (0000:00:12.0) NSID 1 from core 0: 11754.43 137.75 10864.43 4687.43 24471.01 00:09:25.106 PCIE (0000:00:12.0) NSID 2 from core 0: 11754.43 137.75 10854.20 4348.16 23988.08 00:09:25.106 PCIE (0000:00:12.0) NSID 3 from core 0: 11754.43 137.75 10843.86 4008.01 23430.22 00:09:25.106 ======================================================== 00:09:25.106 Total : 70526.58 826.48 10869.80 4008.01 25762.66 00:09:25.106 00:09:25.106 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:09:25.106 ================================================================================= 00:09:25.106 1.00000% : 9175.040us 00:09:25.106 10.00000% : 9889.978us 00:09:25.106 25.00000% : 10187.869us 00:09:25.106 50.00000% : 10664.495us 00:09:25.106 75.00000% : 11260.276us 00:09:25.106 90.00000% : 12153.949us 00:09:25.106 95.00000% : 12690.153us 00:09:25.106 98.00000% : 14179.607us 00:09:25.106 99.00000% : 17992.611us 00:09:25.106 99.50000% : 24665.367us 00:09:25.106 99.90000% : 25618.618us 00:09:25.106 99.99000% : 25737.775us 00:09:25.106 99.99900% : 25856.931us 00:09:25.106 99.99990% : 25856.931us 00:09:25.106 99.99999% : 25856.931us 00:09:25.106 00:09:25.106 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:09:25.106 ================================================================================= 00:09:25.106 1.00000% : 8996.305us 00:09:25.106 10.00000% : 9770.822us 00:09:25.106 25.00000% : 10128.291us 00:09:25.106 50.00000% : 10664.495us 00:09:25.106 75.00000% : 11319.855us 00:09:25.106 90.00000% : 12153.949us 00:09:25.106 95.00000% : 12749.731us 00:09:25.106 98.00000% : 14775.389us 00:09:25.106 99.00000% : 18707.549us 00:09:25.106 99.50000% : 24188.742us 00:09:25.106 99.90000% : 25141.993us 00:09:25.106 99.99000% : 25380.305us 00:09:25.106 99.99900% : 25380.305us 00:09:25.106 99.99990% : 25380.305us 00:09:25.106 99.99999% : 25380.305us 00:09:25.106 00:09:25.106 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:09:25.106 ================================================================================= 00:09:25.106 1.00000% : 9175.040us 00:09:25.106 10.00000% : 9889.978us 00:09:25.106 25.00000% : 10187.869us 00:09:25.106 50.00000% : 10664.495us 00:09:25.106 75.00000% : 11260.276us 00:09:25.106 90.00000% : 12094.371us 00:09:25.106 95.00000% : 12690.153us 00:09:25.106 98.00000% : 14596.655us 00:09:25.106 99.00000% : 18469.236us 00:09:25.106 99.50000% : 23116.335us 00:09:25.106 99.90000% : 24307.898us 00:09:25.106 99.99000% : 24546.211us 00:09:25.106 99.99900% : 24546.211us 00:09:25.106 99.99990% : 24546.211us 00:09:25.106 99.99999% : 24546.211us 00:09:25.106 00:09:25.106 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:09:25.106 ================================================================================= 00:09:25.106 1.00000% : 9115.462us 00:09:25.106 10.00000% : 9830.400us 00:09:25.106 25.00000% : 10187.869us 00:09:25.106 50.00000% : 10604.916us 00:09:25.106 75.00000% : 11260.276us 00:09:25.106 90.00000% : 12153.949us 00:09:25.106 95.00000% : 12690.153us 00:09:25.106 98.00000% : 14596.655us 
00:09:25.106 99.00000% : 18350.080us 00:09:25.106 99.50000% : 22878.022us 00:09:25.106 99.90000% : 24307.898us 00:09:25.106 99.99000% : 24546.211us 00:09:25.106 99.99900% : 24546.211us 00:09:25.106 99.99990% : 24546.211us 00:09:25.106 99.99999% : 24546.211us 00:09:25.106 00:09:25.106 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:09:25.106 ================================================================================= 00:09:25.106 1.00000% : 8877.149us 00:09:25.106 10.00000% : 9889.978us 00:09:25.106 25.00000% : 10187.869us 00:09:25.106 50.00000% : 10604.916us 00:09:25.106 75.00000% : 11260.276us 00:09:25.106 90.00000% : 12153.949us 00:09:25.106 95.00000% : 12690.153us 00:09:25.106 98.00000% : 14596.655us 00:09:25.106 99.00000% : 17635.142us 00:09:25.106 99.50000% : 22878.022us 00:09:25.106 99.90000% : 23831.273us 00:09:25.107 99.99000% : 24069.585us 00:09:25.107 99.99900% : 24069.585us 00:09:25.107 99.99990% : 24069.585us 00:09:25.107 99.99999% : 24069.585us 00:09:25.107 00:09:25.107 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:09:25.107 ================================================================================= 00:09:25.107 1.00000% : 8340.945us 00:09:25.107 10.00000% : 9889.978us 00:09:25.107 25.00000% : 10187.869us 00:09:25.107 50.00000% : 10604.916us 00:09:25.107 75.00000% : 11319.855us 00:09:25.107 90.00000% : 12153.949us 00:09:25.107 95.00000% : 12630.575us 00:09:25.107 98.00000% : 14358.342us 00:09:25.107 99.00000% : 17039.360us 00:09:25.107 99.50000% : 22401.396us 00:09:25.107 99.90000% : 23235.491us 00:09:25.107 99.99000% : 23473.804us 00:09:25.107 99.99900% : 23473.804us 00:09:25.107 99.99990% : 23473.804us 00:09:25.107 99.99999% : 23473.804us 00:09:25.107 00:09:25.107 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:09:25.107 ============================================================================== 00:09:25.107 Range in us Cumulative IO count 00:09:25.107 7030.225 - 7060.015: 0.0934% ( 11) 00:09:25.107 7060.015 - 7089.804: 0.2038% ( 13) 00:09:25.107 7089.804 - 7119.593: 0.2123% ( 1) 00:09:25.107 7119.593 - 7149.382: 0.2293% ( 2) 00:09:25.107 7149.382 - 7179.171: 0.2378% ( 1) 00:09:25.107 7179.171 - 7208.960: 0.2548% ( 2) 00:09:25.107 7208.960 - 7238.749: 0.2632% ( 1) 00:09:25.107 7238.749 - 7268.538: 0.2802% ( 2) 00:09:25.107 7268.538 - 7298.327: 0.2887% ( 1) 00:09:25.107 7298.327 - 7328.116: 0.2972% ( 1) 00:09:25.107 7328.116 - 7357.905: 0.3142% ( 2) 00:09:25.107 7357.905 - 7387.695: 0.3227% ( 1) 00:09:25.107 7387.695 - 7417.484: 0.3312% ( 1) 00:09:25.107 7417.484 - 7447.273: 0.3397% ( 1) 00:09:25.107 7447.273 - 7477.062: 0.3482% ( 1) 00:09:25.107 7477.062 - 7506.851: 0.3651% ( 2) 00:09:25.107 7506.851 - 7536.640: 0.3821% ( 2) 00:09:25.107 7536.640 - 7566.429: 0.3906% ( 1) 00:09:25.107 7566.429 - 7596.218: 0.4076% ( 2) 00:09:25.107 7596.218 - 7626.007: 0.4161% ( 1) 00:09:25.107 7626.007 - 7685.585: 0.4416% ( 3) 00:09:25.107 7685.585 - 7745.164: 0.4671% ( 3) 00:09:25.107 7745.164 - 7804.742: 0.4840% ( 2) 00:09:25.107 7804.742 - 7864.320: 0.5095% ( 3) 00:09:25.107 7864.320 - 7923.898: 0.5265% ( 2) 00:09:25.107 7923.898 - 7983.476: 0.5435% ( 2) 00:09:25.107 8996.305 - 9055.884: 0.5859% ( 5) 00:09:25.107 9055.884 - 9115.462: 0.7388% ( 18) 00:09:25.107 9115.462 - 9175.040: 1.0105% ( 32) 00:09:25.107 9175.040 - 9234.618: 1.2653% ( 30) 00:09:25.107 9234.618 - 9294.196: 1.4776% ( 25) 00:09:25.107 9294.196 - 9353.775: 1.7069% ( 27) 00:09:25.107 9353.775 - 9413.353: 2.0890% ( 45) 00:09:25.107 9413.353 - 9472.931: 
00:09:25.107 [remaining bucket rows of the preceding latency histogram, 9472.931 us through 25856.931 us, were flattened in capture and are elided; cumulative IO count reaches 100.0000%]
00:09:25.107
00:09:25.107 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0:
00:09:25.107 ==============================================================================
00:09:25.107        Range in us     Cumulative    IO count
00:09:25.108 [bucket rows from 6166.342 us through 25380.305 us elided; cumulative IO count reaches 100.0000%]
00:09:25.108
00:09:25.108 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0:
00:09:25.108 ==============================================================================
00:09:25.109        Range in us     Cumulative    IO count
00:09:25.109 [bucket rows from 5600.349 us through 24546.211 us elided; cumulative IO count reaches 100.0000%]
00:09:25.109
00:09:25.109 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
00:09:25.109 ==============================================================================
00:09:25.109        Range in us     Cumulative    IO count
00:09:25.110 [bucket rows from 4676.887 us through 24546.211 us elided; cumulative IO count reaches 100.0000%]
00:09:25.110
00:09:25.110 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:09:25.110 ==============================================================================
00:09:25.110        Range in us     Cumulative    IO count
00:09:25.111 [bucket rows from 4319.418 us through 24069.585 us elided; cumulative IO count reaches 100.0000%]
00:09:25.111
00:09:25.111 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:09:25.111 ==============================================================================
00:09:25.111        Range in us     Cumulative    IO count
00:09:25.112 [bucket rows from 3991.738 us through 23473.804 us elided; cumulative IO count reaches 100.0000%]
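These cumulative histograms close out the nvme_perf run: each row pairs a latency bucket in microseconds with the cumulative percentage of I/Os completed at or below that bucket, plus the per-bucket I/O count in parentheses. Reading down to where the cumulative column crosses a target percentile gives the tail latency. A minimal shell sketch of that lookup, assuming the histogram text was saved to a file named perf.log (the file name and the 99% target are illustrative):

    # Print the first bucket whose cumulative percentage reaches 99%,
    # i.e. an approximate p99 latency in microseconds for the first
    # histogram in the file.
    awk '{ for (i = 2; i <= NF; i++)
             if ($i ~ /%$/ && $i + 0 >= 99) {
                 hi = $(i - 1); sub(/:$/, "", hi)
                 print "p99 <= " hi " us"; exit
             } }' perf.log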
00:09:25.112 18:16:11 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']'
00:09:25.112
00:09:25.112 real 0m2.575s
00:09:25.112 user 0m2.225s
00:09:25.112 sys 0m0.238s
00:09:25.112 18:16:11 nvme.nvme_perf -- common/autotest_common.sh@1124 -- # xtrace_disable
00:09:25.112 18:16:11 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x
00:09:25.112 ************************************
00:09:25.112 END TEST nvme_perf
00:09:25.112 ************************************
00:09:25.112 18:16:11 nvme -- common/autotest_common.sh@1142 -- # return 0
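Every test in this section runs through the harness's run_test wrapper, which is what produces the START/END banners, the real/user/sys timing triples, and the xtrace bookkeeping seen in the log. The real helper lives in common/autotest_common.sh; the following is only a simplified sketch of the visible pattern, not the actual definition:

    run_test() {
        local name=$1; shift
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        time "$@"    # the timed body emits the real/user/sys lines
        echo '************************************'
        echo "END TEST $name"
        echo '************************************'
    }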
00:09:25.112 18:16:11 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:09:25.112 18:16:11 nvme -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']'
00:09:25.112 18:16:11 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable
00:09:25.112 18:16:11 nvme -- common/autotest_common.sh@10 -- # set +x
00:09:25.112 ************************************
00:09:25.112 START TEST nvme_hello_world
00:09:25.112 ************************************
00:09:25.112 18:16:11 nvme.nvme_hello_world -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:09:25.112 Initializing NVMe Controllers
00:09:25.112 Attached to 0000:00:13.0
00:09:25.112   Namespace ID: 1 size: 1GB
00:09:25.112 Attached to 0000:00:10.0
00:09:25.112   Namespace ID: 1 size: 6GB
00:09:25.112 Attached to 0000:00:11.0
00:09:25.112   Namespace ID: 1 size: 5GB
00:09:25.112 Attached to 0000:00:12.0
00:09:25.112   Namespace ID: 1 size: 4GB
00:09:25.112   Namespace ID: 2 size: 4GB
00:09:25.112   Namespace ID: 3 size: 4GB
00:09:25.112 Initialization complete.
00:09:25.112 INFO: using host memory buffer for IO
00:09:25.112 Hello world!
00:09:25.112 INFO: using host memory buffer for IO
00:09:25.112 Hello world!
00:09:25.112 INFO: using host memory buffer for IO
00:09:25.112 Hello world!
00:09:25.112 INFO: using host memory buffer for IO
00:09:25.112 Hello world!
00:09:25.112 INFO: using host memory buffer for IO
00:09:25.112 Hello world!
00:09:25.112 INFO: using host memory buffer for IO
00:09:25.112 Hello world!
00:09:25.112
00:09:25.112 real 0m0.252s
00:09:25.112 user 0m0.093s
00:09:25.112 sys 0m0.114s
00:09:25.112 ************************************
00:09:25.112 END TEST nvme_hello_world
00:09:25.112 ************************************
00:09:25.112 18:16:11 nvme.nvme_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable
00:09:25.112 18:16:11 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x
00:09:25.112 18:16:11 nvme -- common/autotest_common.sh@1142 -- # return 0
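The hello_world example attaches to every controller SPDK can claim and, for each active namespace, writes "Hello world!" to the first sector, reads it back, and prints it; the INFO lines indicate the I/O buffer was allocated in host memory because no controller memory buffer was available. One greeting per namespace (six namespaces across the four controllers above) is therefore the expected pass signature. To rerun it standalone, a sketch of the invocation from the log (interpreting -i as the shared-memory group ID is an assumption based on common SPDK example usage):

    sudo /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0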
00:09:25.112 18:16:11 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:09:25.112 18:16:11 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:09:25.112 18:16:11 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable
00:09:25.112 18:16:11 nvme -- common/autotest_common.sh@10 -- # set +x
00:09:25.112 ************************************
00:09:25.112 START TEST nvme_sgl
00:09:25.112 ************************************
00:09:25.112 18:16:11 nvme.nvme_sgl -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:09:25.371 0000:00:13.0: build_io_request_0 Invalid IO length parameter
00:09:25.371 0000:00:13.0: build_io_request_1 Invalid IO length parameter
00:09:25.371 0000:00:13.0: build_io_request_2 Invalid IO length parameter
00:09:25.371 0000:00:13.0: build_io_request_3 Invalid IO length parameter
00:09:25.371 0000:00:13.0: build_io_request_4 Invalid IO length parameter
00:09:25.371 0000:00:13.0: build_io_request_5 Invalid IO length parameter
00:09:25.371 0000:00:13.0: build_io_request_6 Invalid IO length parameter
00:09:25.371 0000:00:13.0: build_io_request_7 Invalid IO length parameter
00:09:25.371 0000:00:13.0: build_io_request_8 Invalid IO length parameter
00:09:25.371 0000:00:13.0: build_io_request_9 Invalid IO length parameter
00:09:25.371 0000:00:13.0: build_io_request_10 Invalid IO length parameter
00:09:25.371 0000:00:13.0: build_io_request_11 Invalid IO length parameter
00:09:25.371 0000:00:10.0: build_io_request_0 Invalid IO length parameter
00:09:25.371 0000:00:10.0: build_io_request_1 Invalid IO length parameter
00:09:25.371 0000:00:10.0: build_io_request_3 Invalid IO length parameter
00:09:25.371 0000:00:10.0: build_io_request_8 Invalid IO length parameter
00:09:25.371 0000:00:10.0: build_io_request_9 Invalid IO length parameter
00:09:25.371 0000:00:10.0: build_io_request_11 Invalid IO length parameter
00:09:25.371 0000:00:11.0: build_io_request_0 Invalid IO length parameter
00:09:25.371 0000:00:11.0: build_io_request_1 Invalid IO length parameter
00:09:25.371 0000:00:11.0: build_io_request_3 Invalid IO length parameter
00:09:25.371 0000:00:11.0: build_io_request_8 Invalid IO length parameter
00:09:25.371 0000:00:11.0: build_io_request_9 Invalid IO length parameter
00:09:25.371 0000:00:11.0: build_io_request_11 Invalid IO length parameter
00:09:25.371 0000:00:12.0: build_io_request_0 Invalid IO length parameter
00:09:25.371 0000:00:12.0: build_io_request_1 Invalid IO length parameter
00:09:25.371 0000:00:12.0: build_io_request_2 Invalid IO length parameter
00:09:25.371 0000:00:12.0: build_io_request_3 Invalid IO length parameter
00:09:25.371 0000:00:12.0: build_io_request_4 Invalid IO length parameter
00:09:25.371 0000:00:12.0: build_io_request_5 Invalid IO length parameter
00:09:25.371 0000:00:12.0: build_io_request_6 Invalid IO length parameter
00:09:25.371 0000:00:12.0: build_io_request_7 Invalid IO length parameter
00:09:25.371 0000:00:12.0: build_io_request_8 Invalid IO length parameter
00:09:25.371 0000:00:12.0: build_io_request_9 Invalid IO length parameter
00:09:25.371 0000:00:12.0: build_io_request_10 Invalid IO length parameter
00:09:25.371 0000:00:12.0: build_io_request_11 Invalid IO length parameter
00:09:25.631 NVMe Readv/Writev Request test
00:09:25.631 Attached to 0000:00:13.0
00:09:25.631 Attached to 0000:00:10.0
00:09:25.631 Attached to 0000:00:11.0
00:09:25.631 Attached to 0000:00:12.0
00:09:25.631 0000:00:10.0: build_io_request_2 test passed
00:09:25.631 0000:00:10.0: build_io_request_4 test passed
00:09:25.631 0000:00:10.0: build_io_request_5 test passed
00:09:25.631 0000:00:10.0: build_io_request_6 test passed
00:09:25.631 0000:00:10.0: build_io_request_7 test passed
00:09:25.631 0000:00:10.0: build_io_request_10 test passed
00:09:25.631 0000:00:11.0: build_io_request_2 test passed
00:09:25.631 0000:00:11.0: build_io_request_4 test passed
00:09:25.631 0000:00:11.0: build_io_request_5 test passed
00:09:25.631 0000:00:11.0: build_io_request_6 test passed
00:09:25.631 0000:00:11.0: build_io_request_7 test passed
00:09:25.631 0000:00:11.0: build_io_request_10 test passed
00:09:25.631 Cleaning up...
00:09:25.631
00:09:25.631 real 0m0.307s
00:09:25.631 user 0m0.161s
00:09:25.631 sys 0m0.108s
00:09:25.631 18:16:11 nvme.nvme_sgl -- common/autotest_common.sh@1124 -- # xtrace_disable
00:09:25.631 ************************************
00:09:25.631 END TEST nvme_sgl
00:09:25.631 ************************************
00:09:25.631 18:16:11 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x
00:09:25.631 18:16:11 nvme -- common/autotest_common.sh@1142 -- # return 0
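The sgl tool builds twelve scatter-gather I/O request shapes (build_io_request_0 through _11) against every controller. Shapes the build step rejects are logged as "Invalid IO length parameter" and counted as expected failures; shapes that build successfully are submitted, verified, and logged as "test passed". On this run 0000:00:13.0 and 0000:00:12.0 rejected all twelve shapes, while 0000:00:10.0 and 0000:00:11.0 each rejected six and passed the other six. A quick way to tally passes per controller from a saved copy of this output (file name illustrative):

    # Field 2 is the PCI address on each timestamped "test passed" line.
    grep 'test passed' sgl.log | awk '{ print $2 }' | sort | uniq -c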
00:09:25.631 18:16:11 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:09:25.631 18:16:11 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:09:25.631 18:16:11 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable
00:09:25.631 18:16:11 nvme -- common/autotest_common.sh@10 -- # set +x
00:09:25.631 ************************************
00:09:25.631 START TEST nvme_e2edp
00:09:25.631 ************************************
00:09:25.631 18:16:11 nvme.nvme_e2edp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:09:25.890 NVMe Write/Read with End-to-End data protection test
00:09:25.890 Attached to 0000:00:13.0
00:09:25.890 Attached to 0000:00:10.0
00:09:25.890 Attached to 0000:00:11.0
00:09:25.890 Attached to 0000:00:12.0
00:09:25.890 Cleaning up...
00:09:25.890
00:09:25.890 real 0m0.257s
00:09:25.890 user 0m0.100s
00:09:25.890 sys 0m0.100s
00:09:25.890 18:16:12 nvme.nvme_e2edp -- common/autotest_common.sh@1124 -- # xtrace_disable
00:09:25.890 ************************************
00:09:25.890 END TEST nvme_e2edp
00:09:25.890 ************************************
00:09:25.890 18:16:12 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x
00:09:25.890 18:16:12 nvme -- common/autotest_common.sh@1142 -- # return 0
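nvme_dp exercises writes and reads with end-to-end data protection, the NVMe PI feature in which a protection information field is carried and checked alongside the data. The run above goes straight from attach to cleanup, which is consistent with no attached namespace being formatted with protection information, leaving the test nothing to exercise; that reading is an inference, not something the log states. Where nvme-cli is available, the data protection settings of a namespace can be inspected directly (device path illustrative):

    # DPS (data protection settings) of 0 means PI is disabled.
    sudo nvme id-ns /dev/nvme0n1 | grep -i dps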
00:09:25.890 18:16:12 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:09:25.890 18:16:12 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:09:25.890 18:16:12 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable
00:09:25.890 18:16:12 nvme -- common/autotest_common.sh@10 -- # set +x
00:09:25.890 ************************************
00:09:25.890 START TEST nvme_reserve
00:09:25.890 ************************************
00:09:25.890 18:16:12 nvme.nvme_reserve -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:09:26.149 =====================================================
00:09:26.149 NVMe Controller at PCI bus 0, device 19, function 0
00:09:26.149 =====================================================
00:09:26.149 Reservations: Not Supported
00:09:26.149 =====================================================
00:09:26.149 NVMe Controller at PCI bus 0, device 16, function 0
00:09:26.149 =====================================================
00:09:26.149 Reservations: Not Supported
00:09:26.149 =====================================================
00:09:26.149 NVMe Controller at PCI bus 0, device 17, function 0
00:09:26.149 =====================================================
00:09:26.149 Reservations: Not Supported
00:09:26.149 =====================================================
00:09:26.149 NVMe Controller at PCI bus 0, device 18, function 0
00:09:26.149 =====================================================
00:09:26.149 Reservations: Not Supported
00:09:26.149 Reservation test passed
00:09:26.149
00:09:26.149 real 0m0.246s
00:09:26.149 user 0m0.101s
00:09:26.149 sys 0m0.100s
00:09:26.149 18:16:12 nvme.nvme_reserve -- common/autotest_common.sh@1124 -- # xtrace_disable
00:09:26.149 ************************************
00:09:26.149 END TEST nvme_reserve
00:09:26.149 ************************************
00:09:26.149 18:16:12 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x
00:09:26.149 18:16:12 nvme -- common/autotest_common.sh@1142 -- # return 0
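None of the four controllers advertises reservation support, so the register/acquire/release sequence the reserve tool would otherwise drive is skipped and the test passes trivially. Note the controllers are identified here by decimal PCI device numbers, which map back to the hexadecimal BDFs used elsewhere in this log; a one-liner makes the correspondence explicit:

    # 19 -> 0000:00:13.0, 16 -> 0000:00:10.0, 17 -> 0000:00:11.0, 18 -> 0000:00:12.0
    for d in 19 16 17 18; do printf 'device %d -> 0000:00:%02x.0\n' "$d" "$d"; done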
00:09:26.150 18:16:12 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:09:26.150 18:16:12 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:09:26.150 18:16:12 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable
00:09:26.150 18:16:12 nvme -- common/autotest_common.sh@10 -- # set +x
00:09:26.150 ************************************
00:09:26.150 START TEST nvme_err_injection
00:09:26.150 ************************************
00:09:26.150 18:16:12 nvme.nvme_err_injection -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:09:26.409 NVMe Error Injection test
00:09:26.409 Attached to 0000:00:13.0
00:09:26.409 Attached to 0000:00:10.0
00:09:26.409 Attached to 0000:00:11.0
00:09:26.409 Attached to 0000:00:12.0
00:09:26.409 0000:00:13.0: get features failed as expected
00:09:26.409 0000:00:10.0: get features failed as expected
00:09:26.409 0000:00:11.0: get features failed as expected
00:09:26.409 0000:00:12.0: get features failed as expected
00:09:26.409 0000:00:12.0: get features successfully as expected
00:09:26.409 0000:00:13.0: get features successfully as expected
00:09:26.409 0000:00:10.0: get features successfully as expected
00:09:26.409 0000:00:11.0: get features successfully as expected
00:09:26.409 0000:00:13.0: read failed as expected
00:09:26.409 0000:00:10.0: read failed as expected
00:09:26.409 0000:00:11.0: read failed as expected
00:09:26.409 0000:00:12.0: read failed as expected
00:09:26.409 0000:00:13.0: read successfully as expected
00:09:26.409 0000:00:10.0: read successfully as expected
00:09:26.409 0000:00:11.0: read successfully as expected
00:09:26.409 0000:00:12.0: read successfully as expected
00:09:26.409 Cleaning up...
00:09:26.409
00:09:26.409 real 0m0.265s
00:09:26.409 user 0m0.099s
00:09:26.409 sys 0m0.114s
00:09:26.409 18:16:12 nvme.nvme_err_injection -- common/autotest_common.sh@1124 -- # xtrace_disable
00:09:26.409 18:16:12 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x
00:09:26.409 ************************************
00:09:26.409 END TEST nvme_err_injection
00:09:26.409 ************************************
00:09:26.409 18:16:12 nvme -- common/autotest_common.sh@1142 -- # return 0
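As the output indicates, the error-injection pass arms a fault on an admin command (Get Features) and an I/O command (Read) for each controller, confirms the command fails while the fault is armed ("failed as expected"), then removes the injection and confirms the same command succeeds ("successfully as expected"). A saved copy of this output can be checked for that full signature per controller (file name illustrative):

    for bdf in 0000:00:13.0 0000:00:10.0 0000:00:11.0 0000:00:12.0; do
        grep -q "$bdf: read failed as expected" err.log &&
        grep -q "$bdf: read successfully as expected" err.log &&
        echo "$bdf: read injection verified"
    done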
00:09:27.788 21.760 - 21.876: 91.3777% ( 8) 00:09:27.788 21.876 - 21.993: 91.4948% ( 10) 00:09:27.788 21.993 - 22.109: 91.6003% ( 9) 00:09:27.788 22.109 - 22.225: 91.6589% ( 5) 00:09:27.788 22.225 - 22.342: 91.7291% ( 6) 00:09:27.788 22.342 - 22.458: 91.8346% ( 9) 00:09:27.788 22.458 - 22.575: 91.9166% ( 7) 00:09:27.788 22.575 - 22.691: 91.9986% ( 7) 00:09:27.788 22.691 - 22.807: 92.1157% ( 10) 00:09:27.788 22.807 - 22.924: 92.1743% ( 5) 00:09:27.788 22.924 - 23.040: 92.2446% ( 6) 00:09:27.789 23.040 - 23.156: 92.3383% ( 8) 00:09:27.789 23.156 - 23.273: 92.4203% ( 7) 00:09:27.789 23.273 - 23.389: 92.5141% ( 8) 00:09:27.789 23.389 - 23.505: 92.5726% ( 5) 00:09:27.789 23.505 - 23.622: 92.6312% ( 5) 00:09:27.789 23.622 - 23.738: 92.6898% ( 5) 00:09:27.789 23.738 - 23.855: 92.7601% ( 6) 00:09:27.789 23.855 - 23.971: 92.8421% ( 7) 00:09:27.789 23.971 - 24.087: 92.8655% ( 2) 00:09:27.789 24.087 - 24.204: 92.9358% ( 6) 00:09:27.789 24.204 - 24.320: 92.9944% ( 5) 00:09:27.789 24.320 - 24.436: 93.0412% ( 4) 00:09:27.789 24.436 - 24.553: 93.1701% ( 11) 00:09:27.789 24.553 - 24.669: 93.2052% ( 3) 00:09:27.789 24.669 - 24.785: 93.2990% ( 8) 00:09:27.789 24.785 - 24.902: 93.3927% ( 8) 00:09:27.789 24.902 - 25.018: 93.4864% ( 8) 00:09:27.789 25.018 - 25.135: 93.5333% ( 4) 00:09:27.789 25.135 - 25.251: 93.5567% ( 2) 00:09:27.789 25.251 - 25.367: 93.5918% ( 3) 00:09:27.789 25.367 - 25.484: 93.6153% ( 2) 00:09:27.789 25.484 - 25.600: 93.6387% ( 2) 00:09:27.789 25.600 - 25.716: 93.6621% ( 2) 00:09:27.789 25.716 - 25.833: 93.7090% ( 4) 00:09:27.789 25.833 - 25.949: 93.7324% ( 2) 00:09:27.789 25.949 - 26.065: 93.7793% ( 4) 00:09:27.789 26.065 - 26.182: 93.7910% ( 1) 00:09:27.789 26.182 - 26.298: 93.8144% ( 2) 00:09:27.789 26.298 - 26.415: 93.8379% ( 2) 00:09:27.789 26.415 - 26.531: 93.9199% ( 7) 00:09:27.789 26.531 - 26.647: 94.0253% ( 9) 00:09:27.789 26.647 - 26.764: 94.0722% ( 4) 00:09:27.789 26.764 - 26.880: 94.0956% ( 2) 00:09:27.789 26.880 - 26.996: 94.1425% ( 4) 00:09:27.789 26.996 - 27.113: 94.2010% ( 5) 00:09:27.789 27.113 - 27.229: 94.2596% ( 5) 00:09:27.789 27.229 - 27.345: 94.2948% ( 3) 00:09:27.789 27.345 - 27.462: 94.3182% ( 2) 00:09:27.789 27.462 - 27.578: 94.3416% ( 2) 00:09:27.789 27.578 - 27.695: 94.3650% ( 2) 00:09:27.789 27.695 - 27.811: 94.4119% ( 4) 00:09:27.789 27.811 - 27.927: 94.4822% ( 6) 00:09:27.789 27.927 - 28.044: 94.5291% ( 4) 00:09:27.789 28.044 - 28.160: 94.5993% ( 6) 00:09:27.789 28.160 - 28.276: 94.6579% ( 5) 00:09:27.789 28.276 - 28.393: 94.6696% ( 1) 00:09:27.789 28.509 - 28.625: 94.6931% ( 2) 00:09:27.789 28.625 - 28.742: 94.7282% ( 3) 00:09:27.789 28.742 - 28.858: 94.7751% ( 4) 00:09:27.789 28.858 - 28.975: 94.7868% ( 1) 00:09:27.789 28.975 - 29.091: 94.8219% ( 3) 00:09:27.789 29.091 - 29.207: 94.8571% ( 3) 00:09:27.789 29.207 - 29.324: 94.9625% ( 9) 00:09:27.789 29.324 - 29.440: 95.0211% ( 5) 00:09:27.789 29.440 - 29.556: 95.1382% ( 10) 00:09:27.789 29.556 - 29.673: 95.3257% ( 16) 00:09:27.789 29.673 - 29.789: 95.5951% ( 23) 00:09:27.789 29.789 - 30.022: 96.1575% ( 48) 00:09:27.789 30.022 - 30.255: 96.9892% ( 71) 00:09:27.789 30.255 - 30.487: 97.6101% ( 53) 00:09:27.789 30.487 - 30.720: 98.0084% ( 34) 00:09:27.789 30.720 - 30.953: 98.2076% ( 17) 00:09:27.789 30.953 - 31.185: 98.4419% ( 20) 00:09:27.789 31.185 - 31.418: 98.6528% ( 18) 00:09:27.789 31.418 - 31.651: 98.8636% ( 18) 00:09:27.789 31.651 - 31.884: 98.9105% ( 4) 00:09:27.789 31.884 - 32.116: 98.9222% ( 1) 00:09:27.789 32.116 - 32.349: 98.9925% ( 6) 00:09:27.789 32.349 - 32.582: 99.0511% ( 5) 00:09:27.789 32.582 
- 32.815: 99.0979% ( 4) 00:09:27.789 32.815 - 33.047: 99.1214% ( 2) 00:09:27.789 33.047 - 33.280: 99.1331% ( 1) 00:09:27.789 33.513 - 33.745: 99.1565% ( 2) 00:09:27.789 34.211 - 34.444: 99.1682% ( 1) 00:09:27.789 34.444 - 34.676: 99.1799% ( 1) 00:09:27.789 34.676 - 34.909: 99.2034% ( 2) 00:09:27.789 34.909 - 35.142: 99.2385% ( 3) 00:09:27.789 35.142 - 35.375: 99.2854% ( 4) 00:09:27.789 35.375 - 35.607: 99.3088% ( 2) 00:09:27.789 35.607 - 35.840: 99.3205% ( 1) 00:09:27.789 35.840 - 36.073: 99.3557% ( 3) 00:09:27.789 36.305 - 36.538: 99.4025% ( 4) 00:09:27.789 36.538 - 36.771: 99.4142% ( 1) 00:09:27.789 36.771 - 37.004: 99.4377% ( 2) 00:09:27.789 37.004 - 37.236: 99.4611% ( 2) 00:09:27.789 37.236 - 37.469: 99.5080% ( 4) 00:09:27.789 37.469 - 37.702: 99.5197% ( 1) 00:09:27.789 37.702 - 37.935: 99.5431% ( 2) 00:09:27.789 37.935 - 38.167: 99.5548% ( 1) 00:09:27.789 38.167 - 38.400: 99.5783% ( 2) 00:09:27.789 38.400 - 38.633: 99.6017% ( 2) 00:09:27.789 38.633 - 38.865: 99.6251% ( 2) 00:09:27.789 39.331 - 39.564: 99.6368% ( 1) 00:09:27.789 39.796 - 40.029: 99.6603% ( 2) 00:09:27.789 40.029 - 40.262: 99.6720% ( 1) 00:09:27.789 40.495 - 40.727: 99.6954% ( 2) 00:09:27.789 41.891 - 42.124: 99.7188% ( 2) 00:09:27.789 42.589 - 42.822: 99.7423% ( 2) 00:09:27.789 43.985 - 44.218: 99.7540% ( 1) 00:09:27.789 44.218 - 44.451: 99.7657% ( 1) 00:09:27.789 45.149 - 45.382: 99.7774% ( 1) 00:09:27.789 45.382 - 45.615: 99.7891% ( 1) 00:09:27.789 45.615 - 45.847: 99.8126% ( 2) 00:09:27.789 46.080 - 46.313: 99.8360% ( 2) 00:09:27.789 46.313 - 46.545: 99.8477% ( 1) 00:09:27.789 46.545 - 46.778: 99.8946% ( 4) 00:09:27.789 47.011 - 47.244: 99.9063% ( 1) 00:09:27.789 50.269 - 50.502: 99.9180% ( 1) 00:09:27.789 51.433 - 51.665: 99.9297% ( 1) 00:09:27.789 52.131 - 52.364: 99.9414% ( 1) 00:09:27.789 55.156 - 55.389: 99.9531% ( 1) 00:09:27.789 56.320 - 56.553: 99.9649% ( 1) 00:09:27.789 78.662 - 79.127: 99.9766% ( 1) 00:09:27.789 114.967 - 115.433: 99.9883% ( 1) 00:09:27.789 130.327 - 131.258: 100.0000% ( 1) 00:09:27.789 00:09:27.789 Complete histogram 00:09:27.789 ================== 00:09:27.789 Range in us Cumulative Count 00:09:27.789 9.135 - 9.193: 0.0234% ( 2) 00:09:27.789 9.193 - 9.251: 0.0469% ( 2) 00:09:27.789 9.251 - 9.309: 0.1757% ( 11) 00:09:27.789 9.309 - 9.367: 0.4452% ( 23) 00:09:27.789 9.367 - 9.425: 1.2184% ( 66) 00:09:27.789 9.425 - 9.484: 2.2844% ( 91) 00:09:27.789 9.484 - 9.542: 3.9480% ( 142) 00:09:27.789 9.542 - 9.600: 5.9630% ( 172) 00:09:27.789 9.600 - 9.658: 7.9428% ( 169) 00:09:27.789 9.658 - 9.716: 10.2390% ( 196) 00:09:27.789 9.716 - 9.775: 13.1795% ( 251) 00:09:27.789 9.775 - 9.833: 17.1275% ( 337) 00:09:27.789 9.833 - 9.891: 21.0637% ( 336) 00:09:27.789 9.891 - 9.949: 25.3983% ( 370) 00:09:27.789 9.949 - 10.007: 29.4400% ( 345) 00:09:27.789 10.007 - 10.065: 33.3529% ( 334) 00:09:27.789 10.065 - 10.124: 37.2188% ( 330) 00:09:27.789 10.124 - 10.182: 41.2020% ( 340) 00:09:27.789 10.182 - 10.240: 44.9274% ( 318) 00:09:27.789 10.240 - 10.298: 48.1139% ( 272) 00:09:27.789 10.298 - 10.356: 51.2418% ( 267) 00:09:27.789 10.356 - 10.415: 54.0886% ( 243) 00:09:27.789 10.415 - 10.473: 55.8810% ( 153) 00:09:27.789 10.473 - 10.531: 57.2516% ( 117) 00:09:27.789 10.531 - 10.589: 58.3880% ( 97) 00:09:27.789 10.589 - 10.647: 59.2666% ( 75) 00:09:27.789 10.647 - 10.705: 59.7352% ( 40) 00:09:27.789 10.705 - 10.764: 60.1218% ( 33) 00:09:27.789 10.764 - 10.822: 60.6607% ( 46) 00:09:27.789 10.822 - 10.880: 60.9185% ( 22) 00:09:27.789 10.880 - 10.938: 61.2699% ( 30) 00:09:27.789 10.938 - 10.996: 61.7502% ( 41) 
00:09:27.789 10.996 - 11.055: 62.2540% ( 43) 00:09:27.789 11.055 - 11.113: 62.7460% ( 42) 00:09:27.789 11.113 - 11.171: 63.4606% ( 61) 00:09:27.789 11.171 - 11.229: 64.0698% ( 52) 00:09:27.789 11.229 - 11.287: 64.5501% ( 41) 00:09:27.789 11.287 - 11.345: 65.0070% ( 39) 00:09:27.789 11.345 - 11.404: 65.5108% ( 43) 00:09:27.789 11.404 - 11.462: 65.9325% ( 36) 00:09:27.789 11.462 - 11.520: 66.3308% ( 34) 00:09:27.789 11.520 - 11.578: 66.6354% ( 26) 00:09:27.789 11.578 - 11.636: 66.9517% ( 27) 00:09:27.789 11.636 - 11.695: 67.2329% ( 24) 00:09:27.789 11.695 - 11.753: 67.3735% ( 12) 00:09:27.789 11.753 - 11.811: 67.4555% ( 7) 00:09:27.789 11.811 - 11.869: 67.5141% ( 5) 00:09:27.789 11.869 - 11.927: 67.7249% ( 18) 00:09:27.789 11.927 - 11.985: 68.0764% ( 30) 00:09:27.789 11.985 - 12.044: 68.4864% ( 35) 00:09:27.789 12.044 - 12.102: 69.4002% ( 78) 00:09:27.789 12.102 - 12.160: 70.7709% ( 117) 00:09:27.789 12.160 - 12.218: 72.7038% ( 165) 00:09:27.789 12.218 - 12.276: 74.9063% ( 188) 00:09:27.789 12.276 - 12.335: 77.0970% ( 187) 00:09:27.789 12.335 - 12.393: 79.2409% ( 183) 00:09:27.789 12.393 - 12.451: 80.9630% ( 147) 00:09:27.789 12.451 - 12.509: 82.3688% ( 120) 00:09:27.789 12.509 - 12.567: 83.3294% ( 82) 00:09:27.789 12.567 - 12.625: 84.0206% ( 59) 00:09:27.789 12.625 - 12.684: 84.4306% ( 35) 00:09:27.789 12.684 - 12.742: 84.6532% ( 19) 00:09:27.789 12.742 - 12.800: 84.8407% ( 16) 00:09:27.789 12.800 - 12.858: 85.0515% ( 18) 00:09:27.789 12.858 - 12.916: 85.2741% ( 19) 00:09:27.789 12.916 - 12.975: 85.3679% ( 8) 00:09:27.789 12.975 - 13.033: 85.6022% ( 20) 00:09:27.789 13.033 - 13.091: 85.8247% ( 19) 00:09:27.789 13.091 - 13.149: 86.0473% ( 19) 00:09:27.789 13.149 - 13.207: 86.2231% ( 15) 00:09:27.789 13.207 - 13.265: 86.3871% ( 14) 00:09:27.789 13.265 - 13.324: 86.5042% ( 10) 00:09:27.789 13.324 - 13.382: 86.5745% ( 6) 00:09:27.789 13.382 - 13.440: 86.6917% ( 10) 00:09:27.789 13.440 - 13.498: 86.8205% ( 11) 00:09:27.789 13.498 - 13.556: 87.1134% ( 25) 00:09:27.790 13.556 - 13.615: 87.3126% ( 17) 00:09:27.790 13.615 - 13.673: 87.6640% ( 30) 00:09:27.790 13.673 - 13.731: 87.8749% ( 18) 00:09:27.790 13.731 - 13.789: 88.0272% ( 13) 00:09:27.790 13.789 - 13.847: 88.1912% ( 14) 00:09:27.790 13.847 - 13.905: 88.2732% ( 7) 00:09:27.790 13.905 - 13.964: 88.3318% ( 5) 00:09:27.790 13.964 - 14.022: 88.4606% ( 11) 00:09:27.790 14.022 - 14.080: 88.5075% ( 4) 00:09:27.790 14.080 - 14.138: 88.5895% ( 7) 00:09:27.790 14.196 - 14.255: 88.6246% ( 3) 00:09:27.790 14.255 - 14.313: 88.6481% ( 2) 00:09:27.790 14.313 - 14.371: 88.6598% ( 1) 00:09:27.790 14.371 - 14.429: 88.7067% ( 4) 00:09:27.790 14.429 - 14.487: 88.7301% ( 2) 00:09:27.790 14.487 - 14.545: 88.7887% ( 5) 00:09:27.790 14.545 - 14.604: 88.8238% ( 3) 00:09:27.790 14.604 - 14.662: 88.8472% ( 2) 00:09:27.790 14.662 - 14.720: 88.8707% ( 2) 00:09:27.790 14.720 - 14.778: 88.8824% ( 1) 00:09:27.790 14.836 - 14.895: 88.9175% ( 3) 00:09:27.790 14.895 - 15.011: 88.9995% ( 7) 00:09:27.790 15.011 - 15.127: 89.0347% ( 3) 00:09:27.790 15.127 - 15.244: 89.1518% ( 10) 00:09:27.790 15.244 - 15.360: 89.3393% ( 16) 00:09:27.790 15.360 - 15.476: 89.4916% ( 13) 00:09:27.790 15.476 - 15.593: 89.5736% ( 7) 00:09:27.790 15.593 - 15.709: 89.7024% ( 11) 00:09:27.790 15.709 - 15.825: 89.7610% ( 5) 00:09:27.790 15.825 - 15.942: 89.7727% ( 1) 00:09:27.790 15.942 - 16.058: 89.8079% ( 3) 00:09:27.790 16.058 - 16.175: 89.8196% ( 1) 00:09:27.790 16.175 - 16.291: 89.8430% ( 2) 00:09:27.790 16.291 - 16.407: 89.8782% ( 3) 00:09:27.790 16.407 - 16.524: 89.9485% ( 6) 00:09:27.790 16.524 - 
16.640: 90.0539% ( 9) 00:09:27.790 16.640 - 16.756: 90.1125% ( 5) 00:09:27.790 16.756 - 16.873: 90.2179% ( 9) 00:09:27.790 16.873 - 16.989: 90.2648% ( 4) 00:09:27.790 16.989 - 17.105: 90.3819% ( 10) 00:09:27.790 17.105 - 17.222: 90.5225% ( 12) 00:09:27.790 17.222 - 17.338: 90.5811% ( 5) 00:09:27.790 17.338 - 17.455: 90.6631% ( 7) 00:09:27.790 17.455 - 17.571: 90.8037% ( 12) 00:09:27.790 17.571 - 17.687: 90.8974% ( 8) 00:09:27.790 17.687 - 17.804: 91.0145% ( 10) 00:09:27.790 17.804 - 17.920: 91.1082% ( 8) 00:09:27.790 17.920 - 18.036: 91.2137% ( 9) 00:09:27.790 18.036 - 18.153: 91.2957% ( 7) 00:09:27.790 18.153 - 18.269: 91.3543% ( 5) 00:09:27.790 18.269 - 18.385: 91.4128% ( 5) 00:09:27.790 18.385 - 18.502: 91.4714% ( 5) 00:09:27.790 18.502 - 18.618: 91.5769% ( 9) 00:09:27.790 18.618 - 18.735: 91.6589% ( 7) 00:09:27.790 18.735 - 18.851: 91.7526% ( 8) 00:09:27.790 18.851 - 18.967: 91.8112% ( 5) 00:09:27.790 18.967 - 19.084: 91.9166% ( 9) 00:09:27.790 19.084 - 19.200: 91.9634% ( 4) 00:09:27.790 19.200 - 19.316: 92.0220% ( 5) 00:09:27.790 19.316 - 19.433: 92.0455% ( 2) 00:09:27.790 19.433 - 19.549: 92.0572% ( 1) 00:09:27.790 19.549 - 19.665: 92.0923% ( 3) 00:09:27.790 19.665 - 19.782: 92.1157% ( 2) 00:09:27.790 19.782 - 19.898: 92.1392% ( 2) 00:09:27.790 19.898 - 20.015: 92.1743% ( 3) 00:09:27.790 20.015 - 20.131: 92.2095% ( 3) 00:09:27.790 20.131 - 20.247: 92.2329% ( 2) 00:09:27.790 20.247 - 20.364: 92.2798% ( 4) 00:09:27.790 20.364 - 20.480: 92.2915% ( 1) 00:09:27.790 20.480 - 20.596: 92.3149% ( 2) 00:09:27.790 20.596 - 20.713: 92.3266% ( 1) 00:09:27.790 20.713 - 20.829: 92.3618% ( 3) 00:09:27.790 20.829 - 20.945: 92.3735% ( 1) 00:09:27.790 20.945 - 21.062: 92.3969% ( 2) 00:09:27.790 21.062 - 21.178: 92.4086% ( 1) 00:09:27.790 21.178 - 21.295: 92.4321% ( 2) 00:09:27.790 21.295 - 21.411: 92.4789% ( 4) 00:09:27.790 21.411 - 21.527: 92.4906% ( 1) 00:09:27.790 21.527 - 21.644: 92.5492% ( 5) 00:09:27.790 21.644 - 21.760: 92.5961% ( 4) 00:09:27.790 21.760 - 21.876: 92.6312% ( 3) 00:09:27.790 21.876 - 21.993: 92.6664% ( 3) 00:09:27.790 21.993 - 22.109: 92.7249% ( 5) 00:09:27.790 22.109 - 22.225: 92.7601% ( 3) 00:09:27.790 22.225 - 22.342: 92.8069% ( 4) 00:09:27.790 22.342 - 22.458: 92.8772% ( 6) 00:09:27.790 22.458 - 22.575: 92.9241% ( 4) 00:09:27.790 22.575 - 22.691: 92.9475% ( 2) 00:09:27.790 22.691 - 22.807: 92.9592% ( 1) 00:09:27.790 22.807 - 22.924: 92.9944% ( 3) 00:09:27.790 22.924 - 23.040: 93.0178% ( 2) 00:09:27.790 23.156 - 23.273: 93.0295% ( 1) 00:09:27.790 23.273 - 23.389: 93.0412% ( 1) 00:09:27.790 23.389 - 23.505: 93.0530% ( 1) 00:09:27.790 23.505 - 23.622: 93.0764% ( 2) 00:09:27.790 23.738 - 23.855: 93.1701% ( 8) 00:09:27.790 23.855 - 23.971: 93.2170% ( 4) 00:09:27.790 23.971 - 24.087: 93.2873% ( 6) 00:09:27.790 24.087 - 24.204: 93.3341% ( 4) 00:09:27.790 24.204 - 24.320: 93.4630% ( 11) 00:09:27.790 24.320 - 24.436: 93.7090% ( 21) 00:09:27.790 24.436 - 24.553: 94.0487% ( 29) 00:09:27.790 24.553 - 24.669: 94.5291% ( 41) 00:09:27.790 24.669 - 24.785: 94.9508% ( 36) 00:09:27.790 24.785 - 24.902: 95.4780% ( 45) 00:09:27.790 24.902 - 25.018: 95.9700% ( 42) 00:09:27.790 25.018 - 25.135: 96.4738% ( 43) 00:09:27.790 25.135 - 25.251: 97.0361% ( 48) 00:09:27.790 25.251 - 25.367: 97.4461% ( 35) 00:09:27.790 25.367 - 25.484: 97.6570% ( 18) 00:09:27.790 25.484 - 25.600: 97.9147% ( 22) 00:09:27.790 25.600 - 25.716: 97.9967% ( 7) 00:09:27.790 25.716 - 25.833: 98.1373% ( 12) 00:09:27.790 25.833 - 25.949: 98.2310% ( 8) 00:09:27.790 25.949 - 26.065: 98.3247% ( 8) 00:09:27.790 26.065 - 26.182: 98.4185% 
( 8) 00:09:27.790 26.182 - 26.298: 98.5239% ( 9) 00:09:27.790 26.298 - 26.415: 98.6059% ( 7) 00:09:27.790 26.415 - 26.531: 98.7113% ( 9) 00:09:27.790 26.531 - 26.647: 98.8051% ( 8) 00:09:27.790 26.647 - 26.764: 98.8168% ( 1) 00:09:27.790 26.764 - 26.880: 98.9105% ( 8) 00:09:27.790 26.880 - 26.996: 98.9339% ( 2) 00:09:27.790 26.996 - 27.113: 98.9925% ( 5) 00:09:27.790 27.113 - 27.229: 99.0628% ( 6) 00:09:27.790 27.229 - 27.345: 99.1097% ( 4) 00:09:27.790 27.345 - 27.462: 99.1331% ( 2) 00:09:27.790 27.462 - 27.578: 99.1565% ( 2) 00:09:27.790 27.695 - 27.811: 99.2034% ( 4) 00:09:27.790 27.811 - 27.927: 99.2151% ( 1) 00:09:27.790 27.927 - 28.044: 99.2385% ( 2) 00:09:27.790 28.625 - 28.742: 99.2502% ( 1) 00:09:27.790 29.091 - 29.207: 99.2619% ( 1) 00:09:27.790 29.440 - 29.556: 99.2737% ( 1) 00:09:27.790 29.673 - 29.789: 99.2854% ( 1) 00:09:27.790 30.720 - 30.953: 99.2971% ( 1) 00:09:27.790 31.185 - 31.418: 99.3088% ( 1) 00:09:27.790 31.418 - 31.651: 99.3322% ( 2) 00:09:27.790 31.651 - 31.884: 99.4025% ( 6) 00:09:27.790 31.884 - 32.116: 99.4260% ( 2) 00:09:27.790 32.116 - 32.349: 99.4611% ( 3) 00:09:27.790 32.349 - 32.582: 99.5197% ( 5) 00:09:27.790 32.582 - 32.815: 99.5314% ( 1) 00:09:27.790 32.815 - 33.047: 99.6017% ( 6) 00:09:27.790 33.047 - 33.280: 99.6134% ( 1) 00:09:27.790 33.280 - 33.513: 99.6485% ( 3) 00:09:27.790 33.513 - 33.745: 99.6603% ( 1) 00:09:27.790 33.745 - 33.978: 99.6837% ( 2) 00:09:27.790 33.978 - 34.211: 99.6954% ( 1) 00:09:27.790 34.444 - 34.676: 99.7071% ( 1) 00:09:27.790 34.676 - 34.909: 99.7188% ( 1) 00:09:27.790 35.142 - 35.375: 99.7306% ( 1) 00:09:27.790 35.375 - 35.607: 99.7423% ( 1) 00:09:27.790 35.607 - 35.840: 99.7540% ( 1) 00:09:27.790 35.840 - 36.073: 99.7657% ( 1) 00:09:27.790 37.004 - 37.236: 99.7891% ( 2) 00:09:27.790 37.469 - 37.702: 99.8008% ( 1) 00:09:27.790 39.564 - 39.796: 99.8126% ( 1) 00:09:27.790 39.796 - 40.029: 99.8243% ( 1) 00:09:27.790 40.029 - 40.262: 99.8360% ( 1) 00:09:27.790 40.495 - 40.727: 99.8477% ( 1) 00:09:27.790 40.727 - 40.960: 99.8711% ( 2) 00:09:27.790 40.960 - 41.193: 99.8828% ( 1) 00:09:27.790 42.124 - 42.356: 99.9063% ( 2) 00:09:27.790 43.055 - 43.287: 99.9180% ( 1) 00:09:27.790 51.898 - 52.131: 99.9297% ( 1) 00:09:27.790 78.662 - 79.127: 99.9414% ( 1) 00:09:27.790 82.851 - 83.316: 99.9531% ( 1) 00:09:27.790 85.178 - 85.644: 99.9649% ( 1) 00:09:27.790 85.644 - 86.109: 99.9766% ( 1) 00:09:27.790 87.971 - 88.436: 99.9883% ( 1) 00:09:27.790 93.556 - 94.022: 100.0000% ( 1) 00:09:27.790 00:09:27.790 00:09:27.790 real 0m1.261s 00:09:27.790 user 0m1.086s 00:09:27.790 sys 0m0.112s 00:09:27.790 18:16:14 nvme.nvme_overhead -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:27.790 18:16:14 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:09:27.790 ************************************ 00:09:27.790 END TEST nvme_overhead 00:09:27.790 ************************************ 00:09:27.790 18:16:14 nvme -- common/autotest_common.sh@1142 -- # return 0 00:09:27.790 18:16:14 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:09:27.790 18:16:14 nvme -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:09:27.790 18:16:14 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:27.790 18:16:14 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:27.790 ************************************ 00:09:27.790 START TEST nvme_arbitration 00:09:27.790 ************************************ 00:09:27.790 18:16:14 nvme.nvme_arbitration -- 
common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:09:31.124 Initializing NVMe Controllers 00:09:31.124 Attached to 0000:00:13.0 00:09:31.124 Attached to 0000:00:10.0 00:09:31.124 Attached to 0000:00:11.0 00:09:31.124 Attached to 0000:00:12.0 00:09:31.124 Associating QEMU NVMe Ctrl (12343 ) with lcore 0 00:09:31.124 Associating QEMU NVMe Ctrl (12340 ) with lcore 1 00:09:31.124 Associating QEMU NVMe Ctrl (12341 ) with lcore 2 00:09:31.124 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:09:31.124 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:09:31.124 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:09:31.124 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:09:31.124 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:09:31.124 Initialization complete. Launching workers. 00:09:31.124 Starting thread on core 1 with urgent priority queue 00:09:31.124 Starting thread on core 2 with urgent priority queue 00:09:31.124 Starting thread on core 3 with urgent priority queue 00:09:31.124 Starting thread on core 0 with urgent priority queue 00:09:31.124 QEMU NVMe Ctrl (12343 ) core 0: 5141.33 IO/s 19.45 secs/100000 ios 00:09:31.124 QEMU NVMe Ctrl (12342 ) core 0: 5141.33 IO/s 19.45 secs/100000 ios 00:09:31.124 QEMU NVMe Ctrl (12340 ) core 1: 5269.33 IO/s 18.98 secs/100000 ios 00:09:31.124 QEMU NVMe Ctrl (12342 ) core 1: 5269.33 IO/s 18.98 secs/100000 ios 00:09:31.124 QEMU NVMe Ctrl (12341 ) core 2: 5184.00 IO/s 19.29 secs/100000 ios 00:09:31.124 QEMU NVMe Ctrl (12342 ) core 3: 5248.00 IO/s 19.05 secs/100000 ios 00:09:31.124 ======================================================== 00:09:31.124 00:09:31.124 00:09:31.124 real 0m3.267s 00:09:31.124 user 0m9.047s 00:09:31.124 sys 0m0.127s 00:09:31.124 18:16:17 nvme.nvme_arbitration -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:31.124 ************************************ 00:09:31.124 18:16:17 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:09:31.124 END TEST nvme_arbitration 00:09:31.124 ************************************ 00:09:31.124 18:16:17 nvme -- common/autotest_common.sh@1142 -- # return 0 00:09:31.124 18:16:17 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:09:31.124 18:16:17 nvme -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:09:31.124 18:16:17 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:31.124 18:16:17 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:31.124 ************************************ 00:09:31.124 START TEST nvme_single_aen 00:09:31.124 ************************************ 00:09:31.124 18:16:17 nvme.nvme_single_aen -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:09:31.382 Asynchronous Event Request test 00:09:31.382 Attached to 0000:00:13.0 00:09:31.382 Attached to 0000:00:10.0 00:09:31.382 Attached to 0000:00:11.0 00:09:31.382 Attached to 0000:00:12.0 00:09:31.382 Reset controller to setup AER completions for this process 00:09:31.382 Registering asynchronous event callbacks... 
00:09:31.382 Getting orig temperature thresholds of all controllers 00:09:31.382 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:31.382 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:31.382 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:31.382 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:31.382 Setting all controllers temperature threshold low to trigger AER 00:09:31.382 Waiting for all controllers temperature threshold to be set lower 00:09:31.382 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:31.382 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:09:31.382 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:31.382 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:09:31.382 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:31.382 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:09:31.382 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:31.382 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:09:31.382 Waiting for all controllers to trigger AER and reset threshold 00:09:31.382 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:31.382 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:31.382 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:31.382 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:31.382 Cleaning up... 00:09:31.382 00:09:31.382 real 0m0.262s 00:09:31.382 user 0m0.102s 00:09:31.382 sys 0m0.117s 00:09:31.382 ************************************ 00:09:31.382 END TEST nvme_single_aen 00:09:31.382 ************************************ 00:09:31.382 18:16:17 nvme.nvme_single_aen -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:31.382 18:16:17 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:09:31.382 18:16:17 nvme -- common/autotest_common.sh@1142 -- # return 0 00:09:31.382 18:16:17 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:09:31.382 18:16:17 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:31.382 18:16:17 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:31.382 18:16:17 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:31.383 ************************************ 00:09:31.383 START TEST nvme_doorbell_aers 00:09:31.383 ************************************ 00:09:31.383 18:16:17 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1123 -- # nvme_doorbell_aers 00:09:31.383 18:16:17 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:09:31.383 18:16:17 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:09:31.383 18:16:17 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:09:31.383 18:16:17 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:09:31.383 18:16:17 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1513 -- # bdfs=() 00:09:31.383 18:16:17 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1513 -- # local bdfs 00:09:31.383 18:16:17 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:31.383 18:16:17 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:31.383 18:16:17 nvme.nvme_doorbell_aers -- 
common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:09:31.383 18:16:17 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:09:31.383 18:16:17 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:31.383 18:16:17 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:31.383 18:16:17 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:09:31.641 [2024-07-11 18:16:18.022979] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80584) is not found. Dropping the request. 00:09:41.611 Executing: test_write_invalid_db 00:09:41.611 Waiting for AER completion... 00:09:41.611 Failure: test_write_invalid_db 00:09:41.611 00:09:41.611 Executing: test_invalid_db_write_overflow_sq 00:09:41.611 Waiting for AER completion... 00:09:41.611 Failure: test_invalid_db_write_overflow_sq 00:09:41.611 00:09:41.611 Executing: test_invalid_db_write_overflow_cq 00:09:41.611 Waiting for AER completion... 00:09:41.611 Failure: test_invalid_db_write_overflow_cq 00:09:41.611 00:09:41.611 18:16:27 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:41.611 18:16:27 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:41.868 [2024-07-11 18:16:28.038805] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80584) is not found. Dropping the request. 00:09:51.885 Executing: test_write_invalid_db 00:09:51.885 Waiting for AER completion... 00:09:51.885 Failure: test_write_invalid_db 00:09:51.885 00:09:51.885 Executing: test_invalid_db_write_overflow_sq 00:09:51.885 Waiting for AER completion... 00:09:51.885 Failure: test_invalid_db_write_overflow_sq 00:09:51.885 00:09:51.885 Executing: test_invalid_db_write_overflow_cq 00:09:51.885 Waiting for AER completion... 00:09:51.886 Failure: test_invalid_db_write_overflow_cq 00:09:51.886 00:09:51.886 18:16:37 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:51.886 18:16:37 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:51.886 [2024-07-11 18:16:38.083419] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80584) is not found. Dropping the request. 00:10:02.062 Executing: test_write_invalid_db 00:10:02.062 Waiting for AER completion... 00:10:02.062 Failure: test_write_invalid_db 00:10:02.062 00:10:02.062 Executing: test_invalid_db_write_overflow_sq 00:10:02.062 Waiting for AER completion... 00:10:02.062 Failure: test_invalid_db_write_overflow_sq 00:10:02.062 00:10:02.062 Executing: test_invalid_db_write_overflow_cq 00:10:02.062 Waiting for AER completion... 
00:10:02.062 Failure: test_invalid_db_write_overflow_cq 00:10:02.062 00:10:02.062 18:16:47 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:10:02.062 18:16:47 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:10:02.062 [2024-07-11 18:16:48.146608] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80584) is not found. Dropping the request. 00:10:12.060 Executing: test_write_invalid_db 00:10:12.060 Waiting for AER completion... 00:10:12.060 Failure: test_write_invalid_db 00:10:12.060 00:10:12.060 Executing: test_invalid_db_write_overflow_sq 00:10:12.060 Waiting for AER completion... 00:10:12.060 Failure: test_invalid_db_write_overflow_sq 00:10:12.060 00:10:12.060 Executing: test_invalid_db_write_overflow_cq 00:10:12.060 Waiting for AER completion... 00:10:12.060 Failure: test_invalid_db_write_overflow_cq 00:10:12.060 00:10:12.060 ************************************ 00:10:12.060 END TEST nvme_doorbell_aers 00:10:12.060 ************************************ 00:10:12.060 00:10:12.060 real 0m40.234s 00:10:12.060 user 0m34.247s 00:10:12.060 sys 0m5.632s 00:10:12.060 18:16:57 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:12.060 18:16:57 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:10:12.060 18:16:57 nvme -- common/autotest_common.sh@1142 -- # return 0 00:10:12.060 18:16:57 nvme -- nvme/nvme.sh@97 -- # uname 00:10:12.060 18:16:57 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:10:12.060 18:16:57 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:10:12.060 18:16:57 nvme -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:10:12.060 18:16:57 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:12.060 18:16:57 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:12.060 ************************************ 00:10:12.060 START TEST nvme_multi_aen 00:10:12.060 ************************************ 00:10:12.060 18:16:57 nvme.nvme_multi_aen -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:10:12.060 [2024-07-11 18:16:58.201607] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80584) is not found. Dropping the request. 00:10:12.060 [2024-07-11 18:16:58.201734] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80584) is not found. Dropping the request. 00:10:12.060 [2024-07-11 18:16:58.201790] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80584) is not found. Dropping the request. 00:10:12.060 [2024-07-11 18:16:58.203573] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80584) is not found. Dropping the request. 00:10:12.060 [2024-07-11 18:16:58.203627] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80584) is not found. Dropping the request. 00:10:12.060 [2024-07-11 18:16:58.203646] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80584) is not found. Dropping the request. 
00:10:12.060 [2024-07-11 18:16:58.205077] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80584) is not found. Dropping the request. 00:10:12.060 [2024-07-11 18:16:58.205146] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80584) is not found. Dropping the request. 00:10:12.060 [2024-07-11 18:16:58.205165] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80584) is not found. Dropping the request. 00:10:12.060 [2024-07-11 18:16:58.206631] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80584) is not found. Dropping the request. 00:10:12.060 [2024-07-11 18:16:58.206694] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80584) is not found. Dropping the request. 00:10:12.060 [2024-07-11 18:16:58.206712] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80584) is not found. Dropping the request. 00:10:12.060 Child process pid: 81105 00:10:12.060 [Child] Asynchronous Event Request test 00:10:12.060 [Child] Attached to 0000:00:13.0 00:10:12.060 [Child] Attached to 0000:00:10.0 00:10:12.060 [Child] Attached to 0000:00:11.0 00:10:12.060 [Child] Attached to 0000:00:12.0 00:10:12.060 [Child] Registering asynchronous event callbacks... 00:10:12.060 [Child] Getting orig temperature thresholds of all controllers 00:10:12.060 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:12.060 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:12.060 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:12.060 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:12.060 [Child] Waiting for all controllers to trigger AER and reset threshold 00:10:12.060 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:12.060 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:12.060 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:12.060 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:12.060 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:12.060 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:12.060 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:12.060 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:12.060 [Child] Cleaning up... 00:10:12.319 Asynchronous Event Request test 00:10:12.319 Attached to 0000:00:13.0 00:10:12.319 Attached to 0000:00:10.0 00:10:12.319 Attached to 0000:00:11.0 00:10:12.319 Attached to 0000:00:12.0 00:10:12.319 Reset controller to setup AER completions for this process 00:10:12.319 Registering asynchronous event callbacks... 
00:10:12.319 Getting orig temperature thresholds of all controllers 00:10:12.319 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:12.319 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:12.319 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:12.319 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:12.319 Setting all controllers temperature threshold low to trigger AER 00:10:12.319 Waiting for all controllers temperature threshold to be set lower 00:10:12.319 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:12.319 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:10:12.319 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:12.319 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:10:12.319 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:12.319 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:10:12.319 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:12.319 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:10:12.319 Waiting for all controllers to trigger AER and reset threshold 00:10:12.319 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:12.319 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:12.319 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:12.319 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:12.319 Cleaning up... 00:10:12.319 00:10:12.319 real 0m0.483s 00:10:12.319 user 0m0.175s 00:10:12.319 sys 0m0.208s 00:10:12.319 18:16:58 nvme.nvme_multi_aen -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:12.319 18:16:58 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:10:12.319 ************************************ 00:10:12.319 END TEST nvme_multi_aen 00:10:12.319 ************************************ 00:10:12.319 18:16:58 nvme -- common/autotest_common.sh@1142 -- # return 0 00:10:12.319 18:16:58 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:10:12.319 18:16:58 nvme -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:10:12.319 18:16:58 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:12.319 18:16:58 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:12.319 ************************************ 00:10:12.319 START TEST nvme_startup 00:10:12.319 ************************************ 00:10:12.319 18:16:58 nvme.nvme_startup -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:10:12.577 Initializing NVMe Controllers 00:10:12.577 Attached to 0000:00:13.0 00:10:12.577 Attached to 0000:00:10.0 00:10:12.577 Attached to 0000:00:11.0 00:10:12.577 Attached to 0000:00:12.0 00:10:12.577 Initialization complete. 00:10:12.577 Time used:160641.344 (us). 
00:10:12.577 00:10:12.577 real 0m0.232s 00:10:12.577 user 0m0.091s 00:10:12.577 sys 0m0.101s 00:10:12.577 18:16:58 nvme.nvme_startup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:12.577 ************************************ 00:10:12.577 END TEST nvme_startup 00:10:12.577 ************************************ 00:10:12.577 18:16:58 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:10:12.577 18:16:58 nvme -- common/autotest_common.sh@1142 -- # return 0 00:10:12.577 18:16:58 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:10:12.577 18:16:58 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:12.577 18:16:58 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:12.577 18:16:58 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:12.577 ************************************ 00:10:12.577 START TEST nvme_multi_secondary 00:10:12.577 ************************************ 00:10:12.577 18:16:58 nvme.nvme_multi_secondary -- common/autotest_common.sh@1123 -- # nvme_multi_secondary 00:10:12.577 18:16:58 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=81150 00:10:12.577 18:16:58 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:10:12.577 18:16:58 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=81151 00:10:12.577 18:16:58 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:10:12.577 18:16:58 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:10:15.859 Initializing NVMe Controllers 00:10:15.859 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:15.859 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:15.859 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:15.859 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:15.859 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:10:15.859 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:10:15.859 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:10:15.859 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:10:15.859 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:10:15.859 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:10:15.859 Initialization complete. Launching workers. 
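In the Latency(us) tables that follow, the MiB/s column is simply IOPS scaled by the 4096-byte I/O size passed to spdk_nvme_perf (-o 4096). As a worked check against the first row of the next table: \( 2451.58 \times 4096 / 2^{20} \approx 9.58\ \mathrm{MiB/s} \).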
00:10:15.859 ======================================================== 00:10:15.859 Latency(us) 00:10:15.859 Device Information : IOPS MiB/s Average min max 00:10:15.859 PCIE (0000:00:13.0) NSID 1 from core 2: 2451.58 9.58 6525.89 1985.02 13871.20 00:10:15.859 PCIE (0000:00:10.0) NSID 1 from core 2: 2451.58 9.58 6524.51 1712.38 12965.40 00:10:15.859 PCIE (0000:00:11.0) NSID 1 from core 2: 2451.58 9.58 6526.58 1794.63 13269.55 00:10:15.859 PCIE (0000:00:12.0) NSID 1 from core 2: 2451.58 9.58 6526.49 1921.97 15638.03 00:10:15.859 PCIE (0000:00:12.0) NSID 2 from core 2: 2451.58 9.58 6526.94 1910.40 13890.94 00:10:15.859 PCIE (0000:00:12.0) NSID 3 from core 2: 2451.58 9.58 6527.28 1911.90 13942.21 00:10:15.859 ======================================================== 00:10:15.859 Total : 14709.45 57.46 6526.28 1712.38 15638.03 00:10:15.859 00:10:15.859 Initializing NVMe Controllers 00:10:15.859 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:15.859 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:15.859 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:15.859 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:15.859 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:10:15.859 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:10:15.859 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:10:15.859 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:10:15.859 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:10:15.859 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:10:15.859 Initialization complete. Launching workers. 00:10:15.859 ======================================================== 00:10:15.859 Latency(us) 00:10:15.859 Device Information : IOPS MiB/s Average min max 00:10:15.859 PCIE (0000:00:13.0) NSID 1 from core 1: 5156.83 20.14 3102.01 1546.73 5953.53 00:10:15.859 PCIE (0000:00:10.0) NSID 1 from core 1: 5156.83 20.14 3100.64 1511.74 6304.70 00:10:15.859 PCIE (0000:00:11.0) NSID 1 from core 1: 5156.83 20.14 3101.84 1397.06 6046.50 00:10:15.859 PCIE (0000:00:12.0) NSID 1 from core 1: 5156.83 20.14 3101.62 1397.21 6051.66 00:10:15.859 PCIE (0000:00:12.0) NSID 2 from core 1: 5156.83 20.14 3101.54 1428.69 6089.27 00:10:15.859 PCIE (0000:00:12.0) NSID 3 from core 1: 5156.83 20.14 3101.27 1016.24 6097.93 00:10:15.859 ======================================================== 00:10:15.859 Total : 30941.01 120.86 3101.49 1016.24 6304.70 00:10:15.859 00:10:16.116 18:17:02 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 81150 00:10:18.017 Initializing NVMe Controllers 00:10:18.017 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:18.017 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:18.017 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:18.017 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:18.017 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:10:18.017 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:10:18.017 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:10:18.017 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:10:18.017 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:10:18.017 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:10:18.017 Initialization complete. Launching workers. 
00:10:18.017 ======================================================== 00:10:18.017 Latency(us) 00:10:18.017 Device Information : IOPS MiB/s Average min max 00:10:18.017 PCIE (0000:00:13.0) NSID 1 from core 0: 7750.06 30.27 2063.96 993.71 6616.97 00:10:18.017 PCIE (0000:00:10.0) NSID 1 from core 0: 7750.06 30.27 2062.80 990.10 6190.64 00:10:18.017 PCIE (0000:00:11.0) NSID 1 from core 0: 7750.06 30.27 2063.78 1011.55 5838.09 00:10:18.017 PCIE (0000:00:12.0) NSID 1 from core 0: 7750.06 30.27 2063.67 761.21 6123.75 00:10:18.017 PCIE (0000:00:12.0) NSID 2 from core 0: 7750.06 30.27 2063.54 655.57 6692.72 00:10:18.017 PCIE (0000:00:12.0) NSID 3 from core 0: 7750.06 30.27 2063.43 485.57 7144.04 00:10:18.017 ======================================================== 00:10:18.017 Total : 46500.37 181.64 2063.53 485.57 7144.04 00:10:18.017 00:10:18.017 18:17:04 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 81151 00:10:18.017 18:17:04 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=81220 00:10:18.017 18:17:04 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:10:18.017 18:17:04 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=81221 00:10:18.017 18:17:04 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:10:18.017 18:17:04 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:10:21.298 Initializing NVMe Controllers 00:10:21.298 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:21.298 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:21.298 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:21.298 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:21.298 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:10:21.298 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:10:21.298 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:10:21.298 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:10:21.298 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:10:21.298 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:10:21.298 Initialization complete. Launching workers. 
00:10:21.298 ======================================================== 00:10:21.298 Latency(us) 00:10:21.298 Device Information : IOPS MiB/s Average min max 00:10:21.298 PCIE (0000:00:13.0) NSID 1 from core 0: 5541.12 21.64 2886.92 1057.26 7018.04 00:10:21.298 PCIE (0000:00:10.0) NSID 1 from core 0: 5541.12 21.64 2885.85 1000.31 7193.92 00:10:21.298 PCIE (0000:00:11.0) NSID 1 from core 0: 5541.12 21.64 2887.12 1045.86 6332.73 00:10:21.298 PCIE (0000:00:12.0) NSID 1 from core 0: 5541.12 21.64 2887.00 1031.54 6178.36 00:10:21.298 PCIE (0000:00:12.0) NSID 2 from core 0: 5541.12 21.64 2886.81 1045.34 6648.92 00:10:21.298 PCIE (0000:00:12.0) NSID 3 from core 0: 5541.12 21.64 2886.74 1028.26 7322.11 00:10:21.298 ======================================================== 00:10:21.298 Total : 33246.70 129.87 2886.74 1000.31 7322.11 00:10:21.298 00:10:21.298 Initializing NVMe Controllers 00:10:21.298 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:21.298 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:21.298 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:21.298 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:21.298 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:10:21.298 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:10:21.298 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:10:21.298 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:10:21.298 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:10:21.298 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:10:21.298 Initialization complete. Launching workers. 00:10:21.298 ======================================================== 00:10:21.298 Latency(us) 00:10:21.298 Device Information : IOPS MiB/s Average min max 00:10:21.298 PCIE (0000:00:13.0) NSID 1 from core 1: 5336.56 20.85 2997.48 1121.42 6058.32 00:10:21.298 PCIE (0000:00:10.0) NSID 1 from core 1: 5336.56 20.85 2996.01 1124.96 6571.97 00:10:21.298 PCIE (0000:00:11.0) NSID 1 from core 1: 5336.56 20.85 2997.06 1069.39 6523.63 00:10:21.298 PCIE (0000:00:12.0) NSID 1 from core 1: 5336.56 20.85 2996.78 790.19 6733.66 00:10:21.298 PCIE (0000:00:12.0) NSID 2 from core 1: 5336.56 20.85 2996.45 652.91 6626.72 00:10:21.298 PCIE (0000:00:12.0) NSID 3 from core 1: 5336.56 20.85 2996.12 528.00 6273.40 00:10:21.298 ======================================================== 00:10:21.298 Total : 32019.38 125.08 2996.65 528.00 6733.66 00:10:21.298 00:10:23.200 Initializing NVMe Controllers 00:10:23.200 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:23.200 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:23.200 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:23.200 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:23.200 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:10:23.200 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:10:23.200 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:10:23.200 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:10:23.200 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:10:23.200 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:10:23.200 Initialization complete. Launching workers. 
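A second sanity check on these tables comes from Little's law: the number of in-flight I/Os equals IOPS times mean latency, which should recover the queue depth each perf instance was started with. For the first row of the core-2 table below, \( 3468.52\ \mathrm{IO/s} \times 4612.43\,\mu\mathrm{s} \approx 16 \), matching -q 16 (evidently applied per namespace, since the six rows together account for about 96 in-flight I/Os).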
00:10:23.200 ======================================================== 00:10:23.200 Latency(us) 00:10:23.200 Device Information : IOPS MiB/s Average min max 00:10:23.200 PCIE (0000:00:13.0) NSID 1 from core 2: 3468.52 13.55 4612.43 1059.19 13782.12 00:10:23.200 PCIE (0000:00:10.0) NSID 1 from core 2: 3468.52 13.55 4610.46 995.91 18205.07 00:10:23.200 PCIE (0000:00:11.0) NSID 1 from core 2: 3468.52 13.55 4612.12 1079.17 14164.39 00:10:23.201 PCIE (0000:00:12.0) NSID 1 from core 2: 3468.52 13.55 4611.80 810.18 13833.83 00:10:23.201 PCIE (0000:00:12.0) NSID 2 from core 2: 3468.52 13.55 4609.85 708.70 13526.72 00:10:23.201 PCIE (0000:00:12.0) NSID 3 from core 2: 3468.52 13.55 4607.96 548.52 16765.29 00:10:23.201 ======================================================== 00:10:23.201 Total : 20811.15 81.29 4610.77 548.52 18205.07 00:10:23.201 00:10:23.201 ************************************ 00:10:23.201 END TEST nvme_multi_secondary 00:10:23.201 ************************************ 00:10:23.201 18:17:09 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 81220 00:10:23.201 18:17:09 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 81221 00:10:23.201 00:10:23.201 real 0m10.539s 00:10:23.201 user 0m18.428s 00:10:23.201 sys 0m0.734s 00:10:23.201 18:17:09 nvme.nvme_multi_secondary -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:23.201 18:17:09 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:10:23.201 18:17:09 nvme -- common/autotest_common.sh@1142 -- # return 0 00:10:23.201 18:17:09 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:10:23.201 18:17:09 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:10:23.201 18:17:09 nvme -- common/autotest_common.sh@1087 -- # [[ -e /proc/80181 ]] 00:10:23.201 18:17:09 nvme -- common/autotest_common.sh@1088 -- # kill 80181 00:10:23.201 18:17:09 nvme -- common/autotest_common.sh@1089 -- # wait 80181 00:10:23.201 [2024-07-11 18:17:09.382482] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81104) is not found. Dropping the request. 00:10:23.201 [2024-07-11 18:17:09.382601] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81104) is not found. Dropping the request. 00:10:23.201 [2024-07-11 18:17:09.382641] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81104) is not found. Dropping the request. 00:10:23.201 [2024-07-11 18:17:09.382675] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81104) is not found. Dropping the request. 00:10:23.201 [2024-07-11 18:17:09.383639] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81104) is not found. Dropping the request. 00:10:23.201 [2024-07-11 18:17:09.383725] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81104) is not found. Dropping the request. 00:10:23.201 [2024-07-11 18:17:09.383759] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81104) is not found. Dropping the request. 00:10:23.201 [2024-07-11 18:17:09.383793] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81104) is not found. Dropping the request. 
00:10:23.201 [2024-07-11 18:17:09.384682] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81104) is not found. Dropping the request. 00:10:23.201 [2024-07-11 18:17:09.384774] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81104) is not found. Dropping the request. 00:10:23.201 [2024-07-11 18:17:09.384809] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81104) is not found. Dropping the request. 00:10:23.201 [2024-07-11 18:17:09.384845] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81104) is not found. Dropping the request. 00:10:23.201 [2024-07-11 18:17:09.385661] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81104) is not found. Dropping the request. 00:10:23.201 [2024-07-11 18:17:09.385788] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81104) is not found. Dropping the request. 00:10:23.201 [2024-07-11 18:17:09.385826] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81104) is not found. Dropping the request. 00:10:23.201 [2024-07-11 18:17:09.385867] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81104) is not found. Dropping the request. 00:10:23.201 18:17:09 nvme -- common/autotest_common.sh@1091 -- # rm -f /var/run/spdk_stub0 00:10:23.201 18:17:09 nvme -- common/autotest_common.sh@1095 -- # echo 2 00:10:23.201 18:17:09 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:10:23.201 18:17:09 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:23.201 18:17:09 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:23.201 18:17:09 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:23.201 ************************************ 00:10:23.201 START TEST bdev_nvme_reset_stuck_adm_cmd 00:10:23.201 ************************************ 00:10:23.201 18:17:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:10:23.201 * Looking for test storage... 
00:10:23.201 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:23.201 18:17:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:10:23.201 18:17:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:10:23.201 18:17:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:10:23.201 18:17:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:10:23.201 18:17:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:10:23.201 18:17:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:10:23.201 18:17:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1524 -- # bdfs=() 00:10:23.201 18:17:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1524 -- # local bdfs 00:10:23.201 18:17:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:10:23.201 18:17:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:10:23.201 18:17:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1513 -- # bdfs=() 00:10:23.201 18:17:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1513 -- # local bdfs 00:10:23.201 18:17:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:23.201 18:17:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:10:23.201 18:17:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:23.459 18:17:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:10:23.459 18:17:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:23.459 18:17:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1527 -- # echo 0000:00:10.0 00:10:23.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
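The get_nvme_bdfs helper traced just above builds the controller list by piping the generated SPDK config through jq; the resulting printf of the four addresses and the echoed first bdf are visible in the trace. A minimal standalone sketch of the same idiom, reusing the in-repo script path shown in the log:

  # Sketch: enumerate NVMe PCI addresses the way get_nvme_bdfs does;
  # rootdir is taken from the surrounding trace.
  rootdir=/home/vagrant/spdk_repo/spdk
  bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
  printf '%s\n' "${bdfs[@]}"   # 0000:00:10.0 .. 0000:00:13.0 on this VM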
00:10:23.459 18:17:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:10:23.459 18:17:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:10:23.459 18:17:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=81373 00:10:23.460 18:17:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:10:23.460 18:17:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:10:23.460 18:17:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 81373 00:10:23.460 18:17:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@829 -- # '[' -z 81373 ']' 00:10:23.460 18:17:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:23.460 18:17:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:23.460 18:17:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:23.460 18:17:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:23.460 18:17:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:23.460 [2024-07-11 18:17:09.765885] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:10:23.460 [2024-07-11 18:17:09.766062] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81373 ] 00:10:23.718 [2024-07-11 18:17:09.939139] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:23.718 [2024-07-11 18:17:09.985220] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:23.718 [2024-07-11 18:17:09.985429] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:23.718 [2024-07-11 18:17:09.985557] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:23.718 [2024-07-11 18:17:09.985610] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:24.652 18:17:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:24.652 18:17:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@862 -- # return 0 00:10:24.652 18:17:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:10:24.652 18:17:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.652 18:17:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:24.652 nvme0n1 00:10:24.652 18:17:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.652 18:17:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:10:24.652 18:17:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_38C06.txt 00:10:24.653 18:17:10 
nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:10:24.653 18:17:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.653 18:17:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:24.653 true 00:10:24.653 18:17:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.653 18:17:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:10:24.653 18:17:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1720721830 00:10:24.653 18:17:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=81396 00:10:24.653 18:17:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:10:24.653 18:17:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:10:24.653 18:17:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:10:26.567 18:17:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:10:26.567 18:17:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:26.567 18:17:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:26.567 [2024-07-11 18:17:12.803581] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:10:26.567 [2024-07-11 18:17:12.803914] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:10:26.567 [2024-07-11 18:17:12.803968] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:10:26.567 [2024-07-11 18:17:12.803987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:26.567 [2024-07-11 18:17:12.806025] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
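The sequence above is the core of the stuck-admin-command test: arm a one-shot error injection that holds the next admin Get Features (opc 10), fire that command asynchronously so it wedges, then reset the controller and confirm the reset manually completes the stuck command (the nvme_qpair_manual_complete_request notice). A condensed sketch of the RPC flow, reconstructed from the xtrace; the rpc.py path is shortened, and the base64 submission-queue entry (the Get Features / Number of Queues command seen in the completion notice) is elided into a placeholder variable:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Hold the next admin Get Features for up to 15 s, then complete it
# with the injected status sct=0 / sc=1 (Invalid Opcode).
$rpc bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
    --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit

# Issue the Get Features in the background; it stays stuck in the controller.
$rpc bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c "$get_feat_sqe" &
get_feat_pid=$!

sleep 2
$rpc bdev_nvme_reset_controller nvme0  # must complete the stuck command manually
wait "$get_feat_pid"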
00:10:26.567 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 81396 00:10:26.567 18:17:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:26.567 18:17:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 81396 00:10:26.567 18:17:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 81396 00:10:26.567 18:17:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:10:26.567 18:17:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:10:26.567 18:17:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:10:26.567 18:17:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:26.567 18:17:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:26.567 18:17:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:26.567 18:17:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:10:26.567 18:17:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_38C06.txt 00:10:26.567 18:17:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:10:26.567 18:17:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:10:26.567 18:17:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:10:26.567 18:17:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:10:26.567 18:17:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:10:26.567 18:17:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:10:26.567 18:17:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:10:26.567 18:17:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:10:26.567 18:17:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:10:26.567 18:17:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:10:26.567 18:17:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:10:26.567 18:17:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:10:26.567 18:17:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:10:26.567 18:17:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:10:26.567 18:17:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:10:26.567 18:17:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 
-- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:10:26.567 18:17:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:10:26.567 18:17:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:10:26.567 18:17:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:10:26.567 18:17:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_38C06.txt 00:10:26.567 18:17:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 81373 00:10:26.567 18:17:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@948 -- # '[' -z 81373 ']' 00:10:26.567 18:17:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@952 -- # kill -0 81373 00:10:26.567 18:17:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@953 -- # uname 00:10:26.567 18:17:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:26.567 18:17:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 81373 00:10:26.567 killing process with pid 81373 00:10:26.567 18:17:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:26.567 18:17:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:26.567 18:17:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 81373' 00:10:26.567 18:17:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@967 -- # kill 81373 00:10:26.567 18:17:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # wait 81373 00:10:26.825 18:17:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:10:26.825 18:17:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:10:26.825 00:10:26.825 real 0m3.727s 00:10:26.825 user 0m13.375s 00:10:26.825 sys 0m0.545s 00:10:26.825 ************************************ 00:10:26.825 END TEST bdev_nvme_reset_stuck_adm_cmd 00:10:26.825 18:17:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:26.825 18:17:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:26.825 ************************************ 00:10:27.083 18:17:13 nvme -- common/autotest_common.sh@1142 -- # return 0 00:10:27.083 18:17:13 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:10:27.083 18:17:13 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:10:27.083 18:17:13 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:27.083 18:17:13 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:27.083 18:17:13 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:27.083 ************************************ 00:10:27.083 START TEST nvme_fio 00:10:27.083 ************************************ 00:10:27.083 18:17:13 nvme.nvme_fio -- common/autotest_common.sh@1123 -- # nvme_fio_test 00:10:27.083 18:17:13 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:10:27.083 18:17:13 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:10:27.083 18:17:13 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:10:27.083 
18:17:13 nvme.nvme_fio -- common/autotest_common.sh@1513 -- # bdfs=() 00:10:27.083 18:17:13 nvme.nvme_fio -- common/autotest_common.sh@1513 -- # local bdfs 00:10:27.083 18:17:13 nvme.nvme_fio -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:27.083 18:17:13 nvme.nvme_fio -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:27.083 18:17:13 nvme.nvme_fio -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:10:27.083 18:17:13 nvme.nvme_fio -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:10:27.083 18:17:13 nvme.nvme_fio -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:27.083 18:17:13 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:10:27.083 18:17:13 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:10:27.083 18:17:13 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:27.083 18:17:13 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:10:27.083 18:17:13 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:27.341 18:17:13 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:10:27.341 18:17:13 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:27.599 18:17:13 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:10:27.599 18:17:13 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:10:27.599 18:17:13 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:10:27.599 18:17:13 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:10:27.599 18:17:13 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:27.599 18:17:13 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:10:27.599 18:17:13 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:27.599 18:17:13 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:10:27.599 18:17:13 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:10:27.599 18:17:13 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:10:27.599 18:17:13 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:27.599 18:17:13 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:10:27.599 18:17:13 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:10:27.599 18:17:13 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:27.599 18:17:13 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:27.599 18:17:13 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:10:27.599 18:17:13 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 
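The block above is the fio_nvme wrapper assembling an fio run against the SPDK NVMe ioengine: it locates the sanitizer runtime the plugin was linked against (ldd | grep libasan | awk '{print $3}') and preloads it ahead of the plugin, since the ASan runtime has to be the first loaded object. A sketch of the resulting invocation, with paths exactly as they appear in this trace (the PCI address is written with dots rather than colons because fio reserves ':' in filenames):

plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
# Sanitizer runtime first, then the SPDK ioengine plugin.
LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
    /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
    '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096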
00:10:27.599 18:17:13 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:10:27.599 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:27.599 fio-3.35 00:10:27.599 Starting 1 thread 00:10:30.886 00:10:30.886 test: (groupid=0, jobs=1): err= 0: pid=81519: Thu Jul 11 18:17:17 2024 00:10:30.886 read: IOPS=16.6k, BW=64.9MiB/s (68.1MB/s)(130MiB/2001msec) 00:10:30.886 slat (nsec): min=4433, max=60327, avg=5905.06, stdev=1912.75 00:10:30.886 clat (usec): min=275, max=11422, avg=3829.55, stdev=427.72 00:10:30.886 lat (usec): min=280, max=11482, avg=3835.45, stdev=428.28 00:10:30.886 clat percentiles (usec): 00:10:30.886 | 1.00th=[ 3392], 5.00th=[ 3490], 10.00th=[ 3523], 20.00th=[ 3589], 00:10:30.886 | 30.00th=[ 3654], 40.00th=[ 3687], 50.00th=[ 3752], 60.00th=[ 3785], 00:10:30.886 | 70.00th=[ 3851], 80.00th=[ 3982], 90.00th=[ 4228], 95.00th=[ 4359], 00:10:30.886 | 99.00th=[ 5211], 99.50th=[ 6456], 99.90th=[ 7898], 99.95th=[10159], 00:10:30.886 | 99.99th=[11207] 00:10:30.886 bw ( KiB/s): min=60944, max=69290, per=99.95%, avg=66455.75, stdev=3859.04, samples=4 00:10:30.886 iops : min=15236, max=17322, avg=16613.75, stdev=964.63, samples=4 00:10:30.886 write: IOPS=16.7k, BW=65.1MiB/s (68.2MB/s)(130MiB/2001msec); 0 zone resets 00:10:30.886 slat (nsec): min=4511, max=56863, avg=6051.17, stdev=1969.83 00:10:30.886 clat (usec): min=245, max=11277, avg=3838.44, stdev=431.24 00:10:30.886 lat (usec): min=250, max=11297, avg=3844.49, stdev=431.81 00:10:30.886 clat percentiles (usec): 00:10:30.886 | 1.00th=[ 3392], 5.00th=[ 3490], 10.00th=[ 3556], 20.00th=[ 3621], 00:10:30.886 | 30.00th=[ 3654], 40.00th=[ 3687], 50.00th=[ 3752], 60.00th=[ 3785], 00:10:30.886 | 70.00th=[ 3884], 80.00th=[ 4015], 90.00th=[ 4228], 95.00th=[ 4359], 00:10:30.886 | 99.00th=[ 5145], 99.50th=[ 6456], 99.90th=[ 8094], 99.95th=[10290], 00:10:30.886 | 99.99th=[11076] 00:10:30.886 bw ( KiB/s): min=61944, max=69058, per=99.94%, avg=66587.75, stdev=3236.84, samples=4 00:10:30.886 iops : min=15486, max=17264, avg=16646.75, stdev=809.08, samples=4 00:10:30.886 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:10:30.886 lat (msec) : 2=0.05%, 4=79.92%, 10=19.93%, 20=0.06% 00:10:30.886 cpu : usr=99.05%, sys=0.05%, ctx=7, majf=0, minf=626 00:10:30.886 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:30.886 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:30.886 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:30.886 issued rwts: total=33262,33331,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:30.886 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:30.886 00:10:30.886 Run status group 0 (all jobs): 00:10:30.886 READ: bw=64.9MiB/s (68.1MB/s), 64.9MiB/s-64.9MiB/s (68.1MB/s-68.1MB/s), io=130MiB (136MB), run=2001-2001msec 00:10:30.886 WRITE: bw=65.1MiB/s (68.2MB/s), 65.1MiB/s-65.1MiB/s (68.2MB/s-68.2MB/s), io=130MiB (137MB), run=2001-2001msec 00:10:30.886 ----------------------------------------------------- 00:10:30.886 Suppressions used: 00:10:30.886 count bytes template 00:10:30.886 1 32 /usr/src/fio/parse.c 00:10:30.886 1 8 libtcmalloc_minimal.so 00:10:30.886 ----------------------------------------------------- 00:10:30.886 00:10:30.886 18:17:17 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:30.886 18:17:17 nvme.nvme_fio -- 
nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:30.886 18:17:17 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:10:30.886 18:17:17 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:31.145 18:17:17 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:10:31.145 18:17:17 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:31.404 18:17:17 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:10:31.405 18:17:17 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:10:31.405 18:17:17 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:10:31.405 18:17:17 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:10:31.405 18:17:17 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:31.405 18:17:17 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:10:31.405 18:17:17 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:31.405 18:17:17 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:10:31.405 18:17:17 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:10:31.405 18:17:17 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:10:31.405 18:17:17 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:31.405 18:17:17 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:10:31.405 18:17:17 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:10:31.405 18:17:17 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:31.405 18:17:17 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:31.405 18:17:17 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:10:31.405 18:17:17 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:31.405 18:17:17 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:10:31.663 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:31.663 fio-3.35 00:10:31.663 Starting 1 thread 00:10:34.954 00:10:34.954 test: (groupid=0, jobs=1): err= 0: pid=81580: Thu Jul 11 18:17:20 2024 00:10:34.954 read: IOPS=15.1k, BW=59.0MiB/s (61.8MB/s)(118MiB/2001msec) 00:10:34.955 slat (nsec): min=4393, max=51572, avg=6722.53, stdev=2375.78 00:10:34.955 clat (usec): min=231, max=8405, avg=4221.82, stdev=439.45 00:10:34.955 lat (usec): min=237, max=8456, avg=4228.55, stdev=439.90 00:10:34.955 clat percentiles (usec): 00:10:34.955 | 1.00th=[ 2900], 5.00th=[ 3785], 10.00th=[ 3884], 20.00th=[ 3949], 00:10:34.955 | 30.00th=[ 4015], 40.00th=[ 4080], 50.00th=[ 4146], 60.00th=[ 4228], 00:10:34.955 | 70.00th=[ 4293], 80.00th=[ 4490], 
90.00th=[ 4883], 95.00th=[ 5080], 00:10:34.955 | 99.00th=[ 5342], 99.50th=[ 5407], 99.90th=[ 5669], 99.95th=[ 7046], 00:10:34.955 | 99.99th=[ 8225] 00:10:34.955 bw ( KiB/s): min=55224, max=62240, per=97.58%, avg=58920.00, stdev=3523.08, samples=3 00:10:34.955 iops : min=13806, max=15560, avg=14730.00, stdev=880.77, samples=3 00:10:34.955 write: IOPS=15.1k, BW=59.0MiB/s (61.9MB/s)(118MiB/2001msec); 0 zone resets 00:10:34.955 slat (nsec): min=4409, max=57048, avg=6721.43, stdev=2346.64 00:10:34.955 clat (usec): min=265, max=8240, avg=4228.43, stdev=442.72 00:10:34.955 lat (usec): min=272, max=8258, avg=4235.15, stdev=443.15 00:10:34.955 clat percentiles (usec): 00:10:34.955 | 1.00th=[ 2835], 5.00th=[ 3785], 10.00th=[ 3884], 20.00th=[ 3949], 00:10:34.955 | 30.00th=[ 4015], 40.00th=[ 4080], 50.00th=[ 4146], 60.00th=[ 4228], 00:10:34.955 | 70.00th=[ 4293], 80.00th=[ 4490], 90.00th=[ 4883], 95.00th=[ 5080], 00:10:34.955 | 99.00th=[ 5342], 99.50th=[ 5407], 99.90th=[ 6128], 99.95th=[ 7111], 00:10:34.955 | 99.99th=[ 8029] 00:10:34.955 bw ( KiB/s): min=55576, max=61832, per=97.22%, avg=58725.33, stdev=3128.22, samples=3 00:10:34.955 iops : min=13894, max=15458, avg=14681.33, stdev=782.05, samples=3 00:10:34.955 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:10:34.955 lat (msec) : 2=0.08%, 4=25.92%, 10=73.96% 00:10:34.955 cpu : usr=98.75%, sys=0.20%, ctx=4, majf=0, minf=626 00:10:34.955 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:34.955 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:34.955 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:34.955 issued rwts: total=30207,30217,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:34.955 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:34.955 00:10:34.955 Run status group 0 (all jobs): 00:10:34.955 READ: bw=59.0MiB/s (61.8MB/s), 59.0MiB/s-59.0MiB/s (61.8MB/s-61.8MB/s), io=118MiB (124MB), run=2001-2001msec 00:10:34.955 WRITE: bw=59.0MiB/s (61.9MB/s), 59.0MiB/s-59.0MiB/s (61.9MB/s-61.9MB/s), io=118MiB (124MB), run=2001-2001msec 00:10:34.955 ----------------------------------------------------- 00:10:34.955 Suppressions used: 00:10:34.955 count bytes template 00:10:34.955 1 32 /usr/src/fio/parse.c 00:10:34.955 1 8 libtcmalloc_minimal.so 00:10:34.955 ----------------------------------------------------- 00:10:34.955 00:10:34.955 18:17:21 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:34.955 18:17:21 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:34.955 18:17:21 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:10:34.955 18:17:21 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:34.955 18:17:21 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:34.955 18:17:21 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:10:35.213 18:17:21 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:10:35.213 18:17:21 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:10:35.213 18:17:21 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 
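Each of these fio jobs comes from the same per-controller loop: for every BDF found earlier, the test identifies the controller, skips it unless it reports a namespace, checks for extended-LBA formats before settling on a 4 KiB block size, and runs the example fio config through fio_nvme. A condensed sketch of one iteration, with names taken from the xtrace; the extended-LBA fallback is omitted since no controller in this run hits it:

identify=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify
for bdf in "${bdfs[@]}"; do
    # Only exercise controllers that actually expose a namespace.
    "$identify" -r "trtype:PCIe traddr:$bdf" | grep -qE '^Namespace ID:[0-9]+' || continue
    bs=4096  # plain 4 KiB LBAs on every controller in this run
    fio_nvme "$rootdir/app/fio/nvme/example_config.fio" \
        "--filename=trtype=PCIe traddr=${bdf//:/.}" --bs="$bs"
done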
00:10:35.213 18:17:21 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:10:35.213 18:17:21 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:35.213 18:17:21 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:10:35.213 18:17:21 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:35.213 18:17:21 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:10:35.213 18:17:21 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:10:35.213 18:17:21 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:10:35.213 18:17:21 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:35.213 18:17:21 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:10:35.214 18:17:21 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:10:35.214 18:17:21 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:35.214 18:17:21 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:35.214 18:17:21 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:10:35.214 18:17:21 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:35.214 18:17:21 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:10:35.472 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:35.472 fio-3.35 00:10:35.472 Starting 1 thread 00:10:38.757 00:10:38.757 test: (groupid=0, jobs=1): err= 0: pid=81635: Thu Jul 11 18:17:24 2024 00:10:38.757 read: IOPS=14.5k, BW=56.4MiB/s (59.2MB/s)(113MiB/2001msec) 00:10:38.757 slat (usec): min=4, max=112, avg= 6.59, stdev= 2.51 00:10:38.757 clat (usec): min=278, max=13142, avg=4406.98, stdev=502.98 00:10:38.757 lat (usec): min=284, max=13190, avg=4413.57, stdev=503.59 00:10:38.757 clat percentiles (usec): 00:10:38.757 | 1.00th=[ 3785], 5.00th=[ 3916], 10.00th=[ 3982], 20.00th=[ 4047], 00:10:38.757 | 30.00th=[ 4113], 40.00th=[ 4178], 50.00th=[ 4293], 60.00th=[ 4359], 00:10:38.757 | 70.00th=[ 4555], 80.00th=[ 4752], 90.00th=[ 5080], 95.00th=[ 5211], 00:10:38.757 | 99.00th=[ 5735], 99.50th=[ 6128], 99.90th=[ 8848], 99.95th=[11076], 00:10:38.757 | 99.99th=[13042] 00:10:38.757 bw ( KiB/s): min=54504, max=60584, per=98.52%, avg=56944.00, stdev=3212.72, samples=3 00:10:38.757 iops : min=13626, max=15146, avg=14236.00, stdev=803.18, samples=3 00:10:38.757 write: IOPS=14.5k, BW=56.5MiB/s (59.3MB/s)(113MiB/2001msec); 0 zone resets 00:10:38.757 slat (nsec): min=4492, max=49281, avg=6746.48, stdev=2397.83 00:10:38.757 clat (usec): min=305, max=12998, avg=4417.50, stdev=511.67 00:10:38.757 lat (usec): min=311, max=13014, avg=4424.24, stdev=512.26 00:10:38.757 clat percentiles (usec): 00:10:38.757 | 1.00th=[ 3785], 5.00th=[ 3916], 10.00th=[ 3982], 20.00th=[ 4080], 00:10:38.757 | 30.00th=[ 4146], 40.00th=[ 4228], 50.00th=[ 4293], 60.00th=[ 4359], 00:10:38.757 | 70.00th=[ 4555], 80.00th=[ 4817], 90.00th=[ 5080], 95.00th=[ 5211], 00:10:38.757 | 99.00th=[ 5735], 99.50th=[ 6194], 99.90th=[ 8979], 99.95th=[11338], 00:10:38.757 | 99.99th=[12780] 00:10:38.757 bw ( 
KiB/s): min=54776, max=59792, per=98.25%, avg=56872.00, stdev=2607.55, samples=3 00:10:38.757 iops : min=13694, max=14948, avg=14218.00, stdev=651.89, samples=3 00:10:38.757 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:10:38.757 lat (msec) : 2=0.05%, 4=12.37%, 10=87.47%, 20=0.07% 00:10:38.757 cpu : usr=99.00%, sys=0.05%, ctx=8, majf=0, minf=626 00:10:38.757 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:38.757 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:38.757 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:38.757 issued rwts: total=28915,28956,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:38.757 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:38.757 00:10:38.757 Run status group 0 (all jobs): 00:10:38.757 READ: bw=56.4MiB/s (59.2MB/s), 56.4MiB/s-56.4MiB/s (59.2MB/s-59.2MB/s), io=113MiB (118MB), run=2001-2001msec 00:10:38.757 WRITE: bw=56.5MiB/s (59.3MB/s), 56.5MiB/s-56.5MiB/s (59.3MB/s-59.3MB/s), io=113MiB (119MB), run=2001-2001msec 00:10:38.757 ----------------------------------------------------- 00:10:38.757 Suppressions used: 00:10:38.757 count bytes template 00:10:38.757 1 32 /usr/src/fio/parse.c 00:10:38.757 1 8 libtcmalloc_minimal.so 00:10:38.757 ----------------------------------------------------- 00:10:38.757 00:10:38.757 18:17:25 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:38.757 18:17:25 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:38.757 18:17:25 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:10:38.757 18:17:25 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:39.016 18:17:25 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:10:39.016 18:17:25 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:39.275 18:17:25 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:10:39.275 18:17:25 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:10:39.275 18:17:25 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:10:39.275 18:17:25 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:10:39.275 18:17:25 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:39.275 18:17:25 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:10:39.275 18:17:25 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:39.275 18:17:25 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:10:39.275 18:17:25 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:10:39.275 18:17:25 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:10:39.275 18:17:25 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:39.275 18:17:25 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:10:39.275 18:17:25 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 
00:10:39.275 18:17:25 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:39.275 18:17:25 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:39.275 18:17:25 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:10:39.275 18:17:25 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:39.275 18:17:25 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:10:39.546 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:39.546 fio-3.35 00:10:39.546 Starting 1 thread 00:10:42.841 00:10:42.841 test: (groupid=0, jobs=1): err= 0: pid=81696: Thu Jul 11 18:17:28 2024 00:10:42.841 read: IOPS=16.4k, BW=64.0MiB/s (67.1MB/s)(128MiB/2001msec) 00:10:42.841 slat (nsec): min=4360, max=73798, avg=5939.59, stdev=2166.47 00:10:42.841 clat (usec): min=282, max=11619, avg=3885.44, stdev=467.79 00:10:42.841 lat (usec): min=288, max=11693, avg=3891.38, stdev=468.43 00:10:42.841 clat percentiles (usec): 00:10:42.841 | 1.00th=[ 3392], 5.00th=[ 3490], 10.00th=[ 3556], 20.00th=[ 3621], 00:10:42.841 | 30.00th=[ 3654], 40.00th=[ 3720], 50.00th=[ 3752], 60.00th=[ 3818], 00:10:42.841 | 70.00th=[ 3916], 80.00th=[ 4080], 90.00th=[ 4424], 95.00th=[ 4555], 00:10:42.841 | 99.00th=[ 5276], 99.50th=[ 6849], 99.90th=[ 8291], 99.95th=[10028], 00:10:42.841 | 99.99th=[11469] 00:10:42.841 bw ( KiB/s): min=60888, max=68480, per=99.48%, avg=65176.00, stdev=3890.48, samples=3 00:10:42.841 iops : min=15222, max=17120, avg=16294.00, stdev=972.62, samples=3 00:10:42.841 write: IOPS=16.4k, BW=64.1MiB/s (67.2MB/s)(128MiB/2001msec); 0 zone resets 00:10:42.841 slat (usec): min=4, max=111, avg= 6.13, stdev= 2.38 00:10:42.841 clat (usec): min=227, max=11487, avg=3898.26, stdev=470.13 00:10:42.841 lat (usec): min=233, max=11505, avg=3904.40, stdev=470.75 00:10:42.841 clat percentiles (usec): 00:10:42.841 | 1.00th=[ 3425], 5.00th=[ 3523], 10.00th=[ 3556], 20.00th=[ 3621], 00:10:42.841 | 30.00th=[ 3687], 40.00th=[ 3720], 50.00th=[ 3785], 60.00th=[ 3851], 00:10:42.841 | 70.00th=[ 3949], 80.00th=[ 4113], 90.00th=[ 4424], 95.00th=[ 4621], 00:10:42.841 | 99.00th=[ 5145], 99.50th=[ 6849], 99.90th=[ 8979], 99.95th=[10290], 00:10:42.841 | 99.99th=[11338] 00:10:42.841 bw ( KiB/s): min=61192, max=67904, per=98.95%, avg=64946.67, stdev=3426.30, samples=3 00:10:42.841 iops : min=15298, max=16976, avg=16236.67, stdev=856.58, samples=3 00:10:42.841 lat (usec) : 250=0.01%, 500=0.02%, 750=0.01%, 1000=0.01% 00:10:42.841 lat (msec) : 2=0.05%, 4=74.39%, 10=25.47%, 20=0.05% 00:10:42.841 cpu : usr=98.90%, sys=0.15%, ctx=3, majf=0, minf=623 00:10:42.841 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:42.841 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:42.841 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:42.841 issued rwts: total=32774,32833,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:42.841 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:42.841 00:10:42.841 Run status group 0 (all jobs): 00:10:42.841 READ: bw=64.0MiB/s (67.1MB/s), 64.0MiB/s-64.0MiB/s (67.1MB/s-67.1MB/s), io=128MiB (134MB), run=2001-2001msec 00:10:42.841 WRITE: bw=64.1MiB/s (67.2MB/s), 64.1MiB/s-64.1MiB/s (67.2MB/s-67.2MB/s), 
io=128MiB (134MB), run=2001-2001msec 00:10:42.841 ----------------------------------------------------- 00:10:42.841 Suppressions used: 00:10:42.841 count bytes template 00:10:42.841 1 32 /usr/src/fio/parse.c 00:10:42.841 1 8 libtcmalloc_minimal.so 00:10:42.841 ----------------------------------------------------- 00:10:42.841 00:10:42.841 18:17:29 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:42.841 18:17:29 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:10:42.841 00:10:42.841 real 0m15.909s 00:10:42.841 user 0m13.000s 00:10:42.841 sys 0m1.306s 00:10:42.841 18:17:29 nvme.nvme_fio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:42.841 ************************************ 00:10:42.841 END TEST nvme_fio 00:10:42.841 ************************************ 00:10:42.841 18:17:29 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:10:42.841 18:17:29 nvme -- common/autotest_common.sh@1142 -- # return 0 00:10:42.841 ************************************ 00:10:42.841 END TEST nvme 00:10:42.841 ************************************ 00:10:42.841 00:10:42.841 real 1m24.737s 00:10:42.841 user 3m31.666s 00:10:42.841 sys 0m12.391s 00:10:42.841 18:17:29 nvme -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:42.841 18:17:29 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:43.101 18:17:29 -- common/autotest_common.sh@1142 -- # return 0 00:10:43.101 18:17:29 -- spdk/autotest.sh@217 -- # [[ 0 -eq 1 ]] 00:10:43.101 18:17:29 -- spdk/autotest.sh@221 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:10:43.101 18:17:29 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:43.101 18:17:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:43.101 18:17:29 -- common/autotest_common.sh@10 -- # set +x 00:10:43.101 ************************************ 00:10:43.101 START TEST nvme_scc 00:10:43.101 ************************************ 00:10:43.101 18:17:29 nvme_scc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:10:43.101 * Looking for test storage... 
00:10:43.101 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:43.101 18:17:29 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:43.101 18:17:29 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:43.101 18:17:29 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:10:43.101 18:17:29 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:10:43.101 18:17:29 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:43.101 18:17:29 nvme_scc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:43.101 18:17:29 nvme_scc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:43.101 18:17:29 nvme_scc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:43.101 18:17:29 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.101 18:17:29 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.101 18:17:29 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.101 18:17:29 nvme_scc -- paths/export.sh@5 -- # export PATH 00:10:43.101 18:17:29 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.101 18:17:29 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:10:43.101 18:17:29 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:10:43.101 18:17:29 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:10:43.101 18:17:29 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:10:43.101 18:17:29 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:10:43.101 18:17:29 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:10:43.101 18:17:29 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:10:43.101 18:17:29 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:10:43.101 18:17:29 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:10:43.101 18:17:29 nvme_scc -- 
cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:43.101 18:17:29 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:10:43.101 18:17:29 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:10:43.101 18:17:29 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:10:43.101 18:17:29 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:43.360 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:43.619 Waiting for block devices as requested 00:10:43.619 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:43.876 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:10:43.876 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:10:43.876 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:49.145 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:49.145 18:17:35 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:10:49.145 18:17:35 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:10:49.145 18:17:35 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:49.145 18:17:35 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:10:49.145 18:17:35 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:10:49.145 18:17:35 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:10:49.145 18:17:35 nvme_scc -- scripts/common.sh@15 -- # local i 00:10:49.145 18:17:35 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:10:49.145 18:17:35 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:10:49.145 18:17:35 nvme_scc -- scripts/common.sh@24 -- # return 0 00:10:49.145 18:17:35 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:10:49.145 18:17:35 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:10:49.145 18:17:35 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:10:49.145 18:17:35 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:49.145 18:17:35 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:10:49.145 18:17:35 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:10:49.145 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.145 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.145 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:49.145 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.145 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.145 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:49.145 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:10:49.145 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:10:49.145 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.145 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.145 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:49.145 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:10:49.145 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:10:49.145 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.145 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.145 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:10:49.145 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:10:49.146 
18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:10:49.146 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.146 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.146 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:49.146 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:10:49.146 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:10:49.146 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.146 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.146 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:49.146 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:10:49.146 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:10:49.146 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.146 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.146 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:49.146 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:10:49.146 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:10:49.146 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.146 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.146 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:49.146 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:10:49.146 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:10:49.146 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.146 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.146 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.146 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:10:49.146 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:10:49.146 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.146 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.146 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:49.146 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:10:49.146 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:10:49.146 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.146 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.146 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.146 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:10:49.146 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:10:49.146 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.146 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.146 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:49.146 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:10:49.146 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:10:49.146 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.146 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.146 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.146 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:10:49.146 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:10:49.146 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:10:49.146 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.146 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.146 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:10:49.146 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:10:49.146 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.146 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.146 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:49.146 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:10:49.146 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:10:49.146 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.146 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.146 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:49.146 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:10:49.146 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:10:49.146 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.146 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.146 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.146 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:10:49.146 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:10:49.146 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.146 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.146 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:49.146 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:10:49.146 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:10:49.146 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.146 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.146 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:49.146 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:49.146 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:10:49.146 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.146 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.146 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.146 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:10:49.146 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:10:49.146 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.146 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.146 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.146 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:10:49.146 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:10:49.146 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.146 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.146 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.146 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:10:49.146 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:10:49.146 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.146 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r 
reg val
00:10:49.146 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0
00:10:49.146 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0
00:10:49.146 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0
00:10:49.146 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a
00:10:49.146 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3
00:10:49.146 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3
00:10:49.146 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3
00:10:49.146 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7
00:10:49.146 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0
00:10:49.146 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0
00:10:49.146 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0
00:10:49.146 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0
00:10:49.146 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343
00:10:49.146 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373
00:10:49.146 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0
00:10:49.147 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0
00:10:49.147 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0
00:10:49.147 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0
00:10:49.147 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0
00:10:49.147 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0
00:10:49.147 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0
00:10:49.147 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0
00:10:49.147 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0
00:10:49.147 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0
00:10:49.147 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0
00:10:49.147 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0
00:10:49.147 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0
00:10:49.147 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0
00:10:49.147 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0
00:10:49.147 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0
00:10:49.147 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0
00:10:49.147 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0
00:10:49.147 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0
00:10:49.147 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0
00:10:49.147 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0
00:10:49.147 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0
00:10:49.147 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0
00:10:49.147 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0
00:10:49.147 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0
00:10:49.147 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66
00:10:49.147 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44
00:10:49.147 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0
00:10:49.147 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256
00:10:49.147 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d
00:10:49.147 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0
00:10:49.147 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fna]=0
00:10:49.147 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7
00:10:49.147 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0
00:10:49.148 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0
00:10:49.148 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0
00:10:49.148 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0
00:10:49.148 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0
00:10:49.148 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3
00:10:49.148 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1
00:10:49.148 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0
00:10:49.148 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0
00:10:49.148 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0
00:10:49.148 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341
00:10:49.148 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0
00:10:49.148 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0
00:10:49.148 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0
00:10:49.148 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0
00:10:49.148 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0
00:10:49.148 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0
00:10:49.148 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:10:49.148 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-'
00:10:49.148 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=-
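The run of assignments above is nvme/functions.sh's nvme_get helper at work: it pipes `nvme id-ctrl` output through `IFS=: read -r reg val` and eval's each pair into the global associative array nvme0. A minimal sketch of that parsing pattern, reconstructed from the trace rather than copied from functions.sh (the name nvme_get_sketch and the exact whitespace trimming are assumptions):

    # Sketch of the loop the trace is exercising: parse nvme-cli's
    # "field : value" lines into a global associative array.
    nvme_get_sketch() {               # nvme_get_sketch <array> <subcmd> <dev>
        local ref=$1 reg val
        shift
        declare -gA "$ref=()"         # e.g. declare -gA 'nvme0=()'
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue           # skip headers without a value
            reg=${reg//[[:space:]]/}            # drop nvme-cli's column padding
            val=${val# }
            eval "${ref}[${reg}]=\"${val}\""    # nvme0[oacs]="0x12a", etc.
        done < <(nvme "$@")           # e.g. nvme id-ctrl /dev/nvme0
    }
    # Usage: nvme_get_sketch nvme0 id-ctrl /dev/nvme0; echo "${nvme0[oacs]}"  # 0x12a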
00:10:49.148 18:17:35 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns
00:10:49.148 18:17:35 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:10:49.148 18:17:35 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]]
00:10:49.148 18:17:35 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1
00:10:49.148 18:17:35 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1
00:10:49.148 18:17:35 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val
00:10:49.148 18:17:35 nvme_scc -- nvme/functions.sh@18 -- # shift
00:10:49.148 18:17:35 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()'
00:10:49.148 18:17:35 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1
00:10:49.148 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000
00:10:49.148 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000
00:10:49.148 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000
00:10:49.148 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14
00:10:49.148 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7
00:10:49.148 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4
00:10:49.148 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3
00:10:49.148 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f
00:10:49.148 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0
00:10:49.148 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0
00:10:49.148 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0
00:10:49.149 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0
00:10:49.149 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1
00:10:49.149 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0
00:10:49.149 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0
00:10:49.149 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0
00:10:49.149 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0
00:10:49.149 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0
00:10:49.149 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0
00:10:49.149 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0
00:10:49.149 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0
00:10:49.149 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0
00:10:49.149 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0
00:10:49.149 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0
00:10:49.149 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0
00:10:49.149 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0
00:10:49.149 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128
00:10:49.149 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128
00:10:49.149 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127
00:10:49.149 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0
00:10:49.149 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0
00:10:49.149 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0
00:10:49.149 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0
00:10:49.149 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0
00:10:49.149 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000
00:10:49.149 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000
00:10:49.149 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:10:49.149 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 '
00:10:49.149 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 '
00:10:49.149 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:10:49.149 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:10:49.149 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:10:49.150 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:10:49.150 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 '
00:10:49.150 18:17:35 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1
00:10:49.150 18:17:35 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0
00:10:49.150 18:17:35 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns
00:10:49.150 18:17:35 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0
00:10:49.150 18:17:35 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0
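At functions.sh@58-63 above, the fully parsed controller is registered: ctrls maps the device name to its id-ctrl array, nvmes to its namespace map, bdfs to its PCI address (0000:00:11.0 here), and ordered_ctrls pins enumeration order. A hedged sketch of reading those registries back with a bash nameref; get_reg is an illustrative helper, not functions.sh API, and the values shown are the ones recorded in this trace:

    # Registries as populated above (excerpted), plus an illustrative lookup.
    declare -A nvme0=([oncs]=0x15d [sqes]=0x66 [subnqn]=nqn.2019-08.org.qemu:12341)
    declare -A ctrls=([nvme0]=nvme0)
    declare -A bdfs=([nvme0]=0000:00:11.0)

    get_reg() {                          # get_reg <ctrl_dev> <field>
        local -n _ctrl=${ctrls[$1]}      # nameref to that controller's array
        echo "${_ctrl[$2]}"
    }

    get_reg nvme0 oncs                   # -> 0x15d
    echo "${bdfs[nvme0]}"                # -> 0000:00:11.0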
00:10:49.150 18:17:35 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:10:49.150 18:17:35 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]]
00:10:49.150 18:17:35 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0
00:10:49.150 18:17:35 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0
00:10:49.150 18:17:35 nvme_scc -- scripts/common.sh@15 -- # local i
00:10:49.150 18:17:35 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]]
00:10:49.150 18:17:35 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]]
00:10:49.150 18:17:35 nvme_scc -- scripts/common.sh@24 -- # return 0
00:10:49.150 18:17:35 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1
00:10:49.150 18:17:35 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1
00:10:49.150 18:17:35 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val
00:10:49.150 18:17:35 nvme_scc -- nvme/functions.sh@18 -- # shift
00:10:49.150 18:17:35 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()'
00:10:49.150 18:17:35 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1
00:10:49.150 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36
00:10:49.150 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4
00:10:49.150 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 '
00:10:49.150 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl '
00:10:49.150 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 '
00:10:49.150 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6
00:10:49.150 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400
00:10:49.150 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0
00:10:49.150 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7
00:10:49.150 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0
00:10:49.150 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400
00:10:49.150 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0
00:10:49.150 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0
00:10:49.150 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100
00:10:49.150 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000
00:10:49.150 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0
00:10:49.150 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1
00:10:49.150 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000
00:10:49.150 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0
00:10:49.151 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0
00:10:49.151 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0
00:10:49.151 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0
00:10:49.151 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0
00:10:49.151 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0
00:10:49.151 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a
00:10:49.151 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3
00:10:49.151 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3
00:10:49.151 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3
00:10:49.151 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7
00:10:49.151 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0
00:10:49.151 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0
00:10:49.151 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0
00:10:49.151 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0
00:10:49.151 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343
00:10:49.151 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373
00:10:49.151 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0
00:10:49.151 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0
00:10:49.151 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0
00:10:49.151 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0
00:10:49.151 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0
00:10:49.151 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0
00:10:49.151 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0
00:10:49.151 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0
00:10:49.151 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0
00:10:49.151 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0
00:10:49.151 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0
00:10:49.151 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0
00:10:49.152 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0
00:10:49.152 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0
00:10:49.152 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0
00:10:49.152 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0
00:10:49.152 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0
00:10:49.152 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0
00:10:49.152 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0
00:10:49.152 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0
00:10:49.152 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0
00:10:49.152 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0
00:10:49.152 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0
00:10:49.152 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0
00:10:49.152 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0
00:10:49.152 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66
00:10:49.152 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44
00:10:49.152 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0
00:10:49.152 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256
00:10:49.152 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d
00:10:49.152 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0
00:10:49.152 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0
00:10:49.152 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:10:49.152 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r
reg val 00:10:49.152 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:49.152 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:10:49.152 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:10:49.152 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.152 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.152 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.152 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:10:49.152 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:10:49.152 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.152 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.152 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.152 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:10:49.152 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:10:49.152 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.152 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.152 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.152 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:10:49.152 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:10:49.152 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.152 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.152 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.152 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:10:49.152 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:10:49.152 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.152 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.152 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.152 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:10:49.152 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:10:49.152 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.152 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.152 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:49.152 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:10:49.152 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:10:49.152 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.152 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.152 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:49.152 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:10:49.152 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:10:49.152 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.152 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.152 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.152 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:10:49.152 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:10:49.152 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.152 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.152 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.152 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:10:49.152 18:17:35 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:10:49.152 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.153 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.153 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.153 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:10:49.153 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:10:49.153 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.153 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.153 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:10:49.153 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:10:49.153 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:10:49.153 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.153 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.153 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.153 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:10:49.153 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:10:49.153 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.153 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.153 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.153 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:10:49.153 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:10:49.153 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.153 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.153 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.153 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:10:49.153 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:10:49.153 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.153 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.153 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.153 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:10:49.153 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:10:49.153 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.153 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.153 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.153 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:10:49.153 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:10:49.153 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.153 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.153 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.153 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:10:49.153 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:10:49.153 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.153 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.153 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:49.153 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:49.153 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:49.153 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.153 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.153 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:49.153 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:49.153 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:49.153 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.153 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.153 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:49.153 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:10:49.153 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:10:49.153 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.153 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.153 18:17:35 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:10:49.153 18:17:35 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:49.153 18:17:35 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:10:49.153 18:17:35 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:10:49.153 18:17:35 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:10:49.153 18:17:35 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:10:49.153 18:17:35 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:49.153 18:17:35 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:10:49.153 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.153 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.153 18:17:35 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:10:49.153 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:49.153 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.153 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.153 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:49.153 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:10:49.153 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:10:49.153 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.153 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.153 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:49.153 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:10:49.153 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:10:49.153 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.153 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.153 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:49.153 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:10:49.153 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:10:49.153 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.153 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.153 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:49.153 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 
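The scaffolding repeated throughout this trace is a single helper: nvme_get runs nvme-cli, splits each "name : value" output line on the first colon, and evals the pair into a global associative array named by its first argument. Below is a minimal sketch reconstructed from the traced lines (functions.sh@16-23); the control flow (shift, local -gA, the IFS=: read loop, the eval) is read off the trace, while the whitespace trimming and the process-substitution plumbing are assumptions rather than the verbatim SPDK source.

    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"            # e.g. declares the global assoc array nvme1
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue  # skip banner lines with no "name: value" pair
            reg=${reg//[[:space:]]/}   # "sqes      " -> "sqes" (assumed trimming)
            val=${val# }               # drop the space after the colon (assumed)
            eval "${ref}[\$reg]=\$val"
        done < <(/usr/local/src/nvme-cli/nvme "$@")
    }

    # usage matching the trace:
    #   nvme_get nvme1n1 id-ns /dev/nvme1n1
    #   echo "${nvme1n1[nsze]}"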
00:10:49.153 18:17:35 nvme_scc -- nvme/functions.sh@21-23 -- # id-ns fields parsed into nvme1n1[]:
00:10:49.153 18:17:35 nvme_scc --   nsze=0x17a17a ncap=0x17a17a nuse=0x17a17a nsfeat=0x14 nlbaf=7 flbas=0x7 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1
00:10:49.153 18:17:35 nvme_scc --   nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0
00:10:49.154 18:17:35 nvme_scc --   mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0
00:10:49.154 18:17:35 nvme_scc --   nguid=00000000000000000000000000000000 eui64=0000000000000000
00:10:49.154 18:17:35 nvme_scc --   lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0'
00:10:49.154 18:17:35 nvme_scc --   lbaf4='ms:0 lbads:12 rp:0' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0 (in use)'
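One detail worth decoding from the values just captured: with nlbaf=7 (fewer than 16 formats) the format index lives in the low nibble of flbas, so flbas=0x7 selects lbaf7, whose 'ms:64 lbads:12' string means 2^12 = 4096-byte logical blocks with 64 bytes of metadata; at nsze=0x17a17a blocks that is roughly 5.9 GiB. A hedged snippet pulling this out of the arrays built above (run in the same shell after the parse; the string surgery is illustrative, not from functions.sh):

    fmt=$(( ${nvme1n1[flbas]} & 0xf ))          # 0x7 -> format index 7
    lbaf=${nvme1n1[lbaf$fmt]}                   # 'ms:64 lbads:12 rp:0 (in use)'
    lbads=${lbaf#*lbads:}; lbads=${lbads%% *}   # -> 12
    echo "block size: $(( 1 << lbads )) B"      # 4096
    echo "ns size:    $(( ${nvme1n1[nsze]} * (1 << lbads) )) B"   # 6343335936 B ~ 5.9 GiB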
00:10:49.155 18:17:35 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1
00:10:49.155 18:17:35 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1
00:10:49.155 18:17:35 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns
00:10:49.155 18:17:35 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0
00:10:49.155 18:17:35 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1
00:10:49.155 18:17:35 nvme_scc -- nvme/functions.sh@47-50 -- # next controller: /sys/class/nvme/nvme2 exists, pci=0000:00:12.0, pci_can_use 0000:00:12.0 -> return 0
00:10:49.155 18:17:35 nvme_scc -- nvme/functions.sh@51-52 -- # ctrl_dev=nvme2; nvme_get nvme2 id-ctrl /dev/nvme2
00:10:49.155 18:17:35 nvme_scc -- nvme/functions.sh@21-23 -- # id-ctrl fields parsed into nvme2[]:
00:10:49.155 18:17:35 nvme_scc --   vid=0x1b36 ssvid=0x1af4 sn='12342 ' mn='QEMU NVMe Ctrl ' fr='8.0.0 ' rab=6 ieee=525400 cmic=0 mdts=7 cntlid=0 ver=0x10400
00:10:49.155 18:17:35 nvme_scc --   rtd3r=0 rtd3e=0 oaes=0x100 ctratt=0x8000 rrls=0 cntrltype=1 fguid=00000000-0000-0000-0000-000000000000
00:10:49.420 18:17:35 nvme_scc --   crdt1=0 crdt2=0 crdt3=0 nvmsr=0 vwci=0 mec=0 oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0
00:10:49.420 18:17:35 nvme_scc --   wctemp=343 cctemp=373 mtfa=0 hmpre=0 hmmin=0 tnvmcap=0 unvmcap=0 rpmbs=0 edstt=0 dsto=0 fwug=0 kas=0 hctma=0 mntmt=0 mxtmt=0
00:10:49.421 18:17:35 nvme_scc --   sanicap=0 hmminds=0 hmmaxd=0 nsetidmax=0 endgidmax=0 anatt=0 anacap=0 anagrpmax=0 nanagrpid=0 pels=0 domainid=0 megcap=0
00:10:49.421 18:17:35 nvme_scc --   sqes=0x66 cqes=0x44 maxcmd=0 nn=256 oncs=0x15d fuses=0 fna=0 vwc=0x7 awun=0 awupf=0 icsvscc=0 nwpc=0 acwu=0 ocfs=0x3 sgls=0x1 mnan=0 maxdna=0 maxcna=0
00:10:49.422 18:17:35 nvme_scc --   subnqn=nqn.2019-08.org.qemu:12342 ioccsz=0 iorcsz=0 icdoff=0 fcatt=0 msdbd=0 ofcs=0
00:10:49.422 18:17:35 nvme_scc --   ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' rwt='0 rwl:0 idle_power:- active_power:-' active_power_workload=-
00:10:49.422 18:17:35 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns
00:10:49.422 18:17:35 nvme_scc -- nvme/functions.sh@54-57 -- # namespace scan: /sys/class/nvme/nvme2/nvme2n1 exists, ns_dev=nvme2n1; nvme_get nvme2n1 id-ns /dev/nvme2n1
00:10:49.422 18:17:35 nvme_scc -- nvme/functions.sh@21-23 -- # id-ns fields parsed into nvme2n1[]:
00:10:49.422 18:17:35 nvme_scc --   nsze=0x100000 ncap=0x100000
read -r reg val 00:10:49.422 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:49.422 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:10:49.422 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:10:49.422 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.422 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.422 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:49.422 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:10:49.422 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:10:49.422 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.422 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.422 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:49.422 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:10:49.422 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:10:49.422 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.422 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.422 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:49.422 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:10:49.422 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:10:49.422 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.422 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.422 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.423 18:17:35 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:10:49.423 18:17:35 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@22 -- 
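
[annotation] The namespace-level fields just captured for nvme2n1, `mssrl=128`, `mcl=128`, and `msrc=127`, are also copy-specific: per the NVMe spec these are Maximum Single Source Range Length, Maximum Copy Length (both in logical blocks), and Maximum Source Range Count (0-based, so 127 means up to 128 source ranges per command). A sketch of how a Simple Copy request could be validated against them; `check_copy` is a hypothetical helper for illustration, not something from `functions.sh`:

```bash
#!/usr/bin/env bash
# Hypothetical validation of a Simple Copy request against the id-ns
# limits captured above (mssrl/mcl in logical blocks, msrc is 0-based).
mssrl=128 mcl=128 msrc=127

check_copy() {                      # args: "slba:nlb" source descriptors
    local total=0 nlb range
    local -a ranges=("$@")
    (( ${#ranges[@]} <= msrc + 1 )) || { echo "too many source ranges"; return 1; }
    for range in "${ranges[@]}"; do
        nlb=${range#*:}
        (( nlb <= mssrl )) || { echo "range of $nlb blocks exceeds mssrl"; return 1; }
        (( total += nlb ))
    done
    (( total <= mcl )) || { echo "total $total blocks exceeds mcl"; return 1; }
    echo "copy of $total blocks in ${#ranges[@]} ranges is within limits"
}

check_copy 0:64 64:64   # 2 ranges, 128 blocks total -> within limits
```
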
# [[ -n 0 ]] 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:49.423 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@21 -- 
# IFS=: 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@23 -- 
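
[annotation] With nvme2n1 fully parsed, the namespace geometry falls out of three of the captured fields: `flbas=0x4` (bits 3:0 select the active LBA format index), whose descriptor `lbaf4` reads `lbads:12`, i.e. 2^12 = 4096-byte blocks and is the one marked `(in use)`, and `nsze=0x100000`, the namespace size in logical blocks. A quick arithmetic check, derived purely from the logged values:

```bash
#!/usr/bin/env bash
# Derive nvme2n1's capacity from the fields traced above.
nsze=0x100000   # namespace size in logical blocks
flbas=0x4       # bits 3:0 select the active LBA format index
lbads=12        # from lbaf4, the descriptor marked "(in use)"

fmt=$((flbas & 0xf))
block=$((1 << lbads))
bytes=$((nsze * block))
echo "format $fmt, ${block}-byte blocks, $((nsze)) blocks"
echo "capacity: $bytes bytes ($((bytes / 1024 / 1024 / 1024)) GiB)"
# 0x100000 blocks * 4096 B/block = 4294967296 B = 4 GiB per namespace
```

The same `nsze`/`flbas`/`lbaf4` values repeat for nvme2n2 and nvme2n3 below, so each of this controller's three namespaces is a 4 GiB volume formatted without metadata (`ms:0`).
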
# eval 'nvme2n2[nsze]="0x100000"' 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 
00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.424 18:17:35 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.424 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.425 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.425 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:10:49.425 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:10:49.425 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.425 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.425 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.425 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:10:49.425 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:10:49.425 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.425 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.425 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.425 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:10:49.425 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:10:49.425 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.425 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.425 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.425 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:10:49.425 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:10:49.425 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.425 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.425 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.425 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:10:49.425 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:10:49.425 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.425 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.425 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:49.425 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:10:49.425 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:10:49.425 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.425 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.425 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:49.425 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:10:49.425 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:10:49.425 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.425 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.425 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:49.425 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:10:49.425 18:17:35 nvme_scc 
-- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:10:49.425 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.425 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.425 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.425 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:10:49.425 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:10:49.425 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.425 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.425 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.425 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:10:49.425 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:10:49.425 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.425 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.425 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.425 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:10:49.425 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:10:49.425 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.425 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.425 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.425 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:10:49.425 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:10:49.425 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.425 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.425 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.425 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:10:49.425 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:10:49.425 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.425 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.425 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:49.425 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:10:49.425 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:10:49.425 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.425 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.425 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:49.425 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:10:49.425 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:10:49.425 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.425 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.425 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:49.425 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:49.425 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:49.425 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.425 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.425 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:49.425 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # 
eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:49.425 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:49.425 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.425 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.425 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:49.425 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:49.425 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:49.425 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.425 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.425 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:49.425 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:49.425 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:49.425 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.425 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.425 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:49.425 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:49.425 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:49.425 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.425 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.425 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:49.425 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:49.425 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:49.425 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.425 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.425 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:49.425 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:49.425 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:49.425 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.425 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.425 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:49.425 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:49.425 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:49.425 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@20 
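
[annotation] Between namespaces the trace shows the bookkeeping pattern: `local -n _ctrl_ns=nvme2_ns` binds a nameref to a per-controller map, and `_ctrl_ns[${ns##*n}]=nvme2n2` files each device under its namespace index, since `${ns##*n}` strips everything through the last `n` (`/sys/class/nvme/nvme2/nvme2n2` becomes `2`). A condensed sketch of that loop, with the paths and names as they appear in the trace and error handling omitted:

```bash
#!/usr/bin/env bash
# Condensed sketch of the per-namespace registration loop traced above.
declare -gA nvme2_ns=()
ctrl=/sys/class/nvme/nvme2

register_namespaces() {
    local -n _ctrl_ns=${ctrl##*/}_ns   # nameref to nvme2_ns
    local ns
    for ns in "$ctrl/${ctrl##*/}n"*; do        # /sys/class/nvme/nvme2/nvme2n*
        [[ -e $ns ]] || continue
        # ${ns##*n} keeps only what follows the last 'n': the NS index
        _ctrl_ns[${ns##*n}]=${ns##*/}
    done
}

register_namespaces
declare -p nvme2_ns   # nvme2_ns=([1]="nvme2n1" [2]="nvme2n2" [3]="nvme2n3")
```
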
-- # local -gA 'nvme2n3=()' 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.426 18:17:35 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # 
eval 'nvme2n3[nabo]="0"' 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.426 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:10:49.427 18:17:35 nvme_scc -- scripts/common.sh@15 -- # local i 00:10:49.427 18:17:35 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:13.0 ]] 00:10:49.427 18:17:35 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:10:49.427 18:17:35 nvme_scc -- scripts/common.sh@24 -- # return 0 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.427 18:17:35 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.427 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:10:49.428 18:17:35 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:10:49.428 18:17:35 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.428 18:17:35 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:10:49.428 18:17:35 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.428 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.429 18:17:35 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:10:49.429 18:17:35 
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
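Several of the registers just stored for nvme3 are packed fields. oncs=0x15d advertises optional NVM commands, bit 8 being the Copy command that the SCC controller selection below keys on; sqes=0x66 and cqes=0x44 carry the maximum and required queue-entry sizes as log2 values in their high and low nibbles (64-byte SQEs, 16-byte CQEs here). A decoding sketch in bash arithmetic; reading FDP support out of ctratt bit 19 follows NVMe TP4146 rather than anything printed in this log, so treat that line as an assumption:

  oncs=0x15d sqes=0x66 cqes=0x44
  ctratt=0x88010                     # value reported for nvme3 earlier in this log
  (( oncs & 1 << 8 ))    && echo "Copy (SCC) supported"
  (( ctratt & 1 << 19 )) && echo "FDP advertised (assumes TP4146 bit position)"
  echo "SQ entry: $((1 << (sqes & 0xf)))..$((1 << (sqes >> 4))) bytes"
  echo "CQ entry: $((1 << (cqes & 0xf)))..$((1 << (cqes >> 4))) bytes"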
00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:10:49.429 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.430 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.430 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:49.430 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:10:49.430 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:10:49.430 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.430 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.430 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.430 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:10:49.430 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:10:49.430 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.430 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.430 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.430 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:10:49.430 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:10:49.430 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.430 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.430 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.430 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:10:49.430 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:10:49.430 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.430 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.430 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:10:49.430 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:10:49.430 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:10:49.430 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.430 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.430 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.430 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:10:49.430 
18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:10:49.430 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.430 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.430 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.430 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:10:49.430 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:10:49.430 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.430 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.430 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.430 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:10:49.430 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:10:49.430 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.430 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.430 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.430 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:10:49.430 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:10:49.430 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.430 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.430 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.430 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:10:49.430 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:10:49.430 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.430 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.430 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:49.430 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:10:49.430 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:10:49.430 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.430 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.430 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:49.430 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:49.430 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:49.430 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.430 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.430 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:49.430 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:49.430 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:49.430 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.430 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.430 18:17:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:49.430 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:10:49.430 18:17:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:10:49.430 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:49.430 18:17:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:49.430 18:17:35 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 
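The long register walk that ends here is one pattern repeated per controller: nvme_get runs nvme id-ctrl, splits each output line on ':' into a reg/val pair, and evals the pair into a global associative array (nvme3 above). A stripped-down sketch of that loop; the real functions.sh quotes through eval and trims whitespace more carefully than this:

  declare -gA nvme3=()
  while IFS=: read -r reg val; do
      reg=${reg// /} val=${val# }             # crude trim of the column padding
      [[ -n $reg && -n $val ]] && nvme3[$reg]=$val
  done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3)
  echo "${nvme3[mn]} sn=${nvme3[sn]} subnqn=${nvme3[subnqn]}"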
00:10:49.430 18:17:35 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3
00:10:49.430 18:17:35 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns
00:10:49.430 18:17:35 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0
00:10:49.430 18:17:35 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3
00:10:49.430 18:17:35 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 ))
00:10:49.430 18:17:35 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc
00:10:49.430 18:17:35 nvme_scc -- nvme/functions.sh@202 -- # local _ctrls feature=scc
00:10:49.430 18:17:35 nvme_scc -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature"))
00:10:49.430 18:17:35 nvme_scc -- nvme/functions.sh@204 -- # get_ctrls_with_feature scc
00:10:49.430 18:17:35 nvme_scc -- nvme/functions.sh@190 -- # (( 4 == 0 ))
00:10:49.430 18:17:35 nvme_scc -- nvme/functions.sh@192 -- # local ctrl feature=scc
00:10:49.430 18:17:35 nvme_scc -- nvme/functions.sh@194 -- # type -t ctrl_has_scc
00:10:49.430 18:17:35 nvme_scc -- nvme/functions.sh@194 -- # [[ function == function ]]
00:10:49.430 18:17:35 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}"
00:10:49.430 18:17:35 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme1
00:10:49.430 18:17:35 nvme_scc -- nvme/functions.sh@182 -- # local ctrl=nvme1 oncs
00:10:49.690 18:17:35 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme1
00:10:49.690 18:17:35 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme1
00:10:49.690 18:17:35 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme1 oncs
00:10:49.690 18:17:35 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs
00:10:49.690 18:17:35 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]]
00:10:49.690 18:17:35 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1
00:10:49.690 18:17:35 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]]
00:10:49.690 18:17:35 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d
00:10:49.690 18:17:35 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d
00:10:49.690 18:17:35 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 ))
00:10:49.690 18:17:35 nvme_scc -- nvme/functions.sh@197 -- # echo nvme1
00:10:49.690 18:17:35 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}"
00:10:49.690 18:17:35 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme0
00:10:49.690 18:17:35 nvme_scc -- nvme/functions.sh@182 -- # local ctrl=nvme0 oncs
00:10:49.690 18:17:35 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme0
00:10:49.690 18:17:35 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme0
00:10:49.690 18:17:35 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme0 oncs
00:10:49.690 18:17:35 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs
00:10:49.690 18:17:35 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]]
00:10:49.690 18:17:35 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0
00:10:49.690 18:17:35 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]]
00:10:49.690 18:17:35 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d
00:10:49.690 18:17:35 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d
00:10:49.690 18:17:35 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 ))
00:10:49.690 18:17:35 nvme_scc -- nvme/functions.sh@197 -- # echo nvme0
00:10:49.690 18:17:35 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}"
00:10:49.690 18:17:35 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme3
00:10:49.690 18:17:35 nvme_scc -- nvme/functions.sh@182 -- # local ctrl=nvme3 oncs
00:10:49.690 18:17:35 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme3
00:10:49.690 18:17:35 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme3
00:10:49.690 18:17:35 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme3 oncs
00:10:49.690 18:17:35 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs
00:10:49.690 18:17:35 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]]
00:10:49.690 18:17:35 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3
00:10:49.690 18:17:35 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]]
00:10:49.690 18:17:35 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d
00:10:49.690 18:17:35 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d
00:10:49.690 18:17:35 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 ))
00:10:49.690 18:17:35 nvme_scc -- nvme/functions.sh@197 -- # echo nvme3
00:10:49.690 18:17:35 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}"
00:10:49.690 18:17:35 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme2
00:10:49.690 18:17:35 nvme_scc -- nvme/functions.sh@182 -- # local ctrl=nvme2 oncs
00:10:49.690 18:17:35 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme2
00:10:49.690 18:17:35 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme2
00:10:49.690 18:17:35 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme2 oncs
00:10:49.690 18:17:35 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs
00:10:49.690 18:17:35 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]]
00:10:49.690 18:17:35 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2
00:10:49.690 18:17:35 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]]
00:10:49.690 18:17:35 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d
00:10:49.690 18:17:35 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d
00:10:49.690 18:17:35 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 ))
00:10:49.690 18:17:35 nvme_scc -- nvme/functions.sh@197 -- # echo nvme2
00:10:49.690 18:17:35 nvme_scc -- nvme/functions.sh@205 -- # (( 4 > 0 ))
00:10:49.690 18:17:35 nvme_scc -- nvme/functions.sh@206 -- # echo nvme1
00:10:49.690 18:17:35 nvme_scc -- nvme/functions.sh@207 -- # return 0
00:10:49.690 18:17:35 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1
00:10:49.690 18:17:35 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0
00:10:49.690 18:17:35 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:10:49.948 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:10:50.883 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:10:50.883 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
00:10:50.883 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:10:50.883 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
00:10:50.883 18:17:37 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:10:50.883 18:17:37 nvme_scc -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']'
00:10:50.883 18:17:37 nvme_scc -- common/autotest_common.sh@1105 -- # xtrace_disable
00:10:50.883 18:17:37 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:10:50.883 ************************************
00:10:50.883 START TEST nvme_simple_copy
00:10:50.883 ************************************
00:10:50.883 18:17:37 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:10:51.142 Initializing NVMe Controllers
00:10:51.142 Attaching to 0000:00:10.0
00:10:51.142 Controller supports SCC. Attached to 0000:00:10.0
00:10:51.142 Namespace ID: 1 size: 6GB
00:10:51.142 Initialization complete.
00:10:51.142 
00:10:51.142 Controller QEMU NVMe Ctrl (12340 )
00:10:51.142 Controller PCI vendor:6966 PCI subsystem vendor:6900
00:10:51.142 Namespace Block Size:4096
00:10:51.142 Writing LBAs 0 to 63 with Random Data
00:10:51.142 Copied LBAs from 0 - 63 to the Destination LBA 256
00:10:51.142 LBAs matching Written Data: 64
00:10:51.142 ************************************
00:10:51.142 END TEST nvme_simple_copy
00:10:51.142 ************************************
00:10:51.142 
00:10:51.142 real	0m0.265s
00:10:51.142 user	0m0.095s
00:10:51.142 sys	0m0.068s
00:10:51.142 18:17:37 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1124 -- # xtrace_disable
00:10:51.142 18:17:37 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x
00:10:51.142 18:17:37 nvme_scc -- common/autotest_common.sh@1142 -- # return 0
00:10:51.142 ************************************
00:10:51.142 END TEST nvme_scc
00:10:51.142 ************************************
00:10:51.142 
00:10:51.142 real	0m8.138s
00:10:51.142 user	0m1.322s
00:10:51.142 sys	0m1.667s
00:10:51.142 18:17:37 nvme_scc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:10:51.142 18:17:37 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:10:51.142 18:17:37 -- common/autotest_common.sh@1142 -- # return 0
00:10:51.142 18:17:37 -- spdk/autotest.sh@223 -- # [[ 0 -eq 1 ]]
00:10:51.142 18:17:37 -- spdk/autotest.sh@226 -- # [[ 0 -eq 1 ]]
00:10:51.142 18:17:37 -- spdk/autotest.sh@229 -- # [[ '' -eq 1 ]]
00:10:51.142 18:17:37 -- spdk/autotest.sh@232 -- # [[ 1 -eq 1 ]]
00:10:51.142 18:17:37 -- spdk/autotest.sh@233 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh
00:10:51.142 18:17:37 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:10:51.142 18:17:37 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:10:51.142 18:17:37 -- common/autotest_common.sh@10 -- # set +x
00:10:51.142 ************************************
00:10:51.142 START TEST nvme_fdp
00:10:51.142 ************************************
00:10:51.142 18:17:37 nvme_fdp -- common/autotest_common.sh@1123 -- # test/nvme/nvme_fdp.sh
00:10:51.142 * Looking for test storage...
00:10:51.142 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:51.142 18:17:37 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:51.401 18:17:37 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:51.401 18:17:37 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:10:51.401 18:17:37 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:10:51.401 18:17:37 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:51.401 18:17:37 nvme_fdp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:51.401 18:17:37 nvme_fdp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:51.401 18:17:37 nvme_fdp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:51.401 18:17:37 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:51.401 18:17:37 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:51.401 18:17:37 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:51.401 18:17:37 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:10:51.401 18:17:37 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:51.401 18:17:37 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:10:51.401 18:17:37 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:10:51.401 18:17:37 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:10:51.401 18:17:37 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:10:51.401 18:17:37 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:10:51.401 18:17:37 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:10:51.401 18:17:37 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:10:51.401 18:17:37 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:10:51.401 18:17:37 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:10:51.401 18:17:37 nvme_fdp -- 
cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:51.401 18:17:37 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:51.660 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:51.919 Waiting for block devices as requested 00:10:51.919 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:51.919 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:10:51.919 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:10:52.178 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:57.483 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:57.483 18:17:43 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:10:57.483 18:17:43 nvme_fdp -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:10:57.483 18:17:43 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:57.483 18:17:43 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:10:57.483 18:17:43 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:10:57.483 18:17:43 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:10:57.483 18:17:43 nvme_fdp -- scripts/common.sh@15 -- # local i 00:10:57.483 18:17:43 nvme_fdp -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:10:57.483 18:17:43 nvme_fdp -- scripts/common.sh@22 -- # [[ -z '' ]] 00:10:57.483 18:17:43 nvme_fdp -- scripts/common.sh@24 -- # return 0 00:10:57.483 18:17:43 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:10:57.483 18:17:43 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:10:57.483 18:17:43 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:10:57.483 18:17:43 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:57.483 18:17:43 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:10:57.483 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.483 18:17:43 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:10:57.483 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.483 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:57.483 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.483 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.483 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:57.483 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:10:57.483 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:10:57.483 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.483 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.483 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:57.483 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:10:57.483 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:10:57.483 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.483 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.483 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:10:57.483 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:10:57.483 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:10:57.483 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.483 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.483 
18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:57.483 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:10:57.483 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:10:57.483 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.483 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.483 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:57.483 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:10:57.483 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:10:57.483 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.483 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.483 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:57.483 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:10:57.483 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:10:57.483 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.483 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.483 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:57.483 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:10:57.483 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:10:57.483 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.483 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.483 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.483 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:10:57.483 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:10:57.483 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.483 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.483 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:57.483 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:10:57.483 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:10:57.483 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.483 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.483 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.483 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:10:57.483 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:10:57.483 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.483 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.483 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:57.483 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:10:57.483 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:10:57.483 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.483 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.483 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.483 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:10:57.483 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:10:57.483 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.483 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.483 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.483 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme0[rtd3e]="0"' 00:10:57.483 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:10:57.483 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.483 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.483 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:57.483 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:10:57.483 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:10:57.483 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.483 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.483 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:57.483 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:10:57.483 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:10:57.483 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.483 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.483 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.483 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:10:57.484 18:17:43 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.484 18:17:43 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:10:57.484 18:17:43 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:10:57.484 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.485 18:17:43 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:10:57.485 18:17:43 nvme_fdp 
-- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg 
val 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.485 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.486 18:17:43 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n - ]] 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:57.486 
18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:10:57.486 
18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.486 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.486 18:17:43 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.487 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:10:57.487 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:10:57.487 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.487 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.487 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.487 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:10:57.487 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:10:57.487 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.487 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.487 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:57.487 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:10:57.487 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:10:57.487 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.487 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.487 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:57.487 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:10:57.487 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:10:57.487 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.487 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.487 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:57.487 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:10:57.487 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:10:57.487 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.487 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.487 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.487 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:10:57.487 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:10:57.487 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.487 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.487 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.487 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:10:57.487 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:10:57.487 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.487 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.487 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.487 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:10:57.487 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:10:57.487 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.487 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.487 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.487 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:10:57.487 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:10:57.487 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.487 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.487 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.487 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:10:57.487 18:17:43 
nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:10:57.487 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.487 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.487 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:57.487 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:10:57.487 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:10:57.487 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.487 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.487 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:57.487 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:10:57.487 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:10:57.487 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.487 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.487 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:57.487 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:57.487 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:57.487 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.487 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.487 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:57.487 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:57.487 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:57.487 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.487 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.487 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:57.487 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:57.487 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:57.487 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.487 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.487 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:57.487 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:57.487 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:57.487 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.487 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.487 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:57.487 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:57.487 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:57.487 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.487 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.487 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:57.487 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:57.487 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 
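What this wall of trace is doing: `nvme_get` splits each line of `nvme id-ctrl` / `nvme id-ns` output on `:` into a reg/val pair and evals it into a global bash associative array (`nvme0`, `nvme0n1`, ...), skipping lines with an empty value. A minimal sketch of that pattern, reduced from the functions.sh@16-@23 trace lines above rather than copied from the script; the key normalization (`${reg// /}`) is an assumption, since the trace only shows the already-clean keys (`mn`, `fr`, `rab`, ...):

```bash
nvme_get() {
    local ref=$1 reg val
    shift                                      # remaining args: id-ctrl /dev/nvmeX
    local -gA "$ref=()"                        # @20: global associative array
    while IFS=: read -r reg val; do            # @21: split "reg : val" lines
        [[ -n $val ]] || continue              # @22: skip lines with no value
        eval "${ref}[${reg// /}]=\"${val# }\"" # @23: e.g. nvme0[mn]="QEMU NVMe Ctrl "
    done < <(nvme "$@")                        # @16 runs /usr/local/src/nvme-cli/nvme
}
# as invoked in the trace:
#   nvme_get nvme0 id-ctrl /dev/nvme0; echo "${nvme0[mn]}"
```

Note the trailing spaces in values like `'QEMU NVMe Ctrl '` are preserved deliberately, since only the single separator space after the colon is stripped.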
00:10:57.487 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.487 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.487 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:57.487 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:57.487 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:57.487 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.487 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.487 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:57.487 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:57.487 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:57.487 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.487 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.487 18:17:43 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:10:57.487 18:17:43 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:10:57.487 18:17:43 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:10:57.487 18:17:43 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:10:57.487 18:17:43 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:10:57.487 18:17:43 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:57.487 18:17:43 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:10:57.487 18:17:43 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:10:57.487 18:17:43 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:10:57.487 18:17:43 nvme_fdp -- scripts/common.sh@15 -- # local i 00:10:57.487 18:17:43 nvme_fdp -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:10:57.487 18:17:43 nvme_fdp -- scripts/common.sh@22 -- # [[ -z '' ]] 00:10:57.487 18:17:43 nvme_fdp -- scripts/common.sh@24 -- # return 0 00:10:57.487 18:17:43 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:10:57.487 18:17:43 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:10:57.487 18:17:43 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:10:57.487 18:17:43 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:57.487 18:17:43 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:10:57.487 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.487 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.487 18:17:43 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:10:57.487 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:57.487 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.487 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.487 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:57.487 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:10:57.487 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:10:57.487 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:10:57.488 18:17:43 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.488 
18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
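The arrays store the identify fields raw; decoding is left to callers. For reference, a hedged decode of a few values captured in this run (nvme0/nvme0n1 above), using the bit layouts from the NVMe specification; the 4 KiB page size for the MDTS calculation is an assumption, since CAP.MPSMIN is not in this trace:

```bash
ver=0x10400 sqes=0x66 cqes=0x44 mdts=7 flbas=0x4 nsze=0x140000
echo "NVMe version: $(( ver >> 16 )).$(( (ver >> 8) & 0xff ))"                # 1.4
echo "SQ/CQ entry:  $(( 1 << (sqes & 0xf) ))/$(( 1 << (cqes & 0xf) )) bytes"  # 64/16
# MDTS is 2^mdts units of CAP.MPSMIN pages; 4 KiB pages assumed here
echo "Max transfer: $(( (1 << mdts) * 4 )) KiB"                               # 512 KiB
# flbas bits 3:0 select the in-use LBA format: lbaf4 = "ms:0 lbads:12 (in use)"
# in the dump above, i.e. 2^12 = 4096-byte blocks
echo "Namespace:    $(( nsze * (1 << 12) / 1024**3 )) GiB"                    # 5
```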
00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:10:57.488 18:17:43 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1[elpe]=0 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.488 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
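Around the nvme0 → nvme1 hand-off above (functions.sh@47-@63, plus the `pci_can_use` check from scripts/common.sh), an outer loop registers each controller and its namespaces in global lookup tables. A reconstruction of that skeleton from the trace, assuming `nvme_get` and `pci_can_use` are already defined; the BDF-discovery line is an assumption, since the log only shows the resulting `pci=0000:00:10.0`:

```bash
declare -A ctrls=() nvmes=() bdfs=()            # lookup tables filled at @60-@63
declare -a ordered_ctrls=()

for ctrl in /sys/class/nvme/nvme*; do
    [[ -e $ctrl ]] || continue                       # @48
    pci=$(basename "$(readlink -f "$ctrl/device")")  # assumption; log shows only the value (@49)
    pci_can_use "$pci" || continue                   # @50: PCI allow/block-list filter
    ctrl_dev=${ctrl##*/}                             # @51: e.g. nvme1
    nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"    # @52: fills nvme1[...] as traced above
    declare -n _ctrl_ns=${ctrl_dev}_ns               # @53: nameref, as in the trace
    for ns in "$ctrl/${ctrl##*/}n"*; do              # @54: per-namespace id-ns pass
        [[ -e $ns ]] || continue                     # @55 checks the /sys path
        nvme_get "${ns##*/}" id-ns "/dev/${ns##*/}"  # @57
        _ctrl_ns[${ns##*n}]=${ns##*/}                # @58: e.g. nvme0_ns[1]=nvme0n1
    done
    ctrls["$ctrl_dev"]=$ctrl_dev                     # @60
    nvmes["$ctrl_dev"]=${ctrl_dev}_ns                # @61
    bdfs["$ctrl_dev"]=$pci                           # @62
    ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev       # @63
    unset -n _ctrl_ns                                # re-point cleanly next iteration
done
```

In the real script this runs inside a function with `local -n`, which is why the trace shows `local` rather than `declare`; the flow is the same.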
00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:10:57.489 18:17:43 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.489 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 
00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.490 18:17:43 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- 
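The trace above is the body of nvme_get: it captures nvme-cli output one line at a time, splits each line on ':' into reg and val, and evals the pair into a global associative array named after the device node. A minimal sketch of that loop, reconstructed from the traced statements (functions.sh@16-23); the key/value whitespace trimming is an assumption, not the verbatim SPDK source:

nvme_get() {
	local ref=$1 reg val
	shift
	local -gA "$ref=()" # global associative array named after the device, e.g. nvme1n1
	while IFS=: read -r reg val; do
		[[ -n $val ]] || continue          # the [[ -n ... ]] guard seen before every eval
		reg=${reg//[[:space:]]/}           # assumption: nvme-cli pads the key with whitespace
		eval "${ref}[${reg}]=\"${val# }\"" # e.g. eval 'nvme1n1[nsze]="0x17a17a"'
	done < <(/usr/local/src/nvme-cli/nvme "$@")
}

Called as nvme_get nvme1n1 id-ns /dev/nvme1n1, after which ${nvme1n1[nsze]} expands to 0x17a17a, matching the values recorded below.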
00:10:57.490 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1 id-ns fields parsed into nvme1n1[]: nsze=0x17a17a ncap=0x17a17a nuse=0x17a17a nsfeat=0x14 nlbaf=7 flbas=0x7 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000 lbaf0='ms:0 lbads:9 rp:0 ' lbaf1='ms:8 lbads:9 rp:0 ' lbaf2='ms:16 lbads:9 rp:0 ' lbaf3='ms:64 lbads:9 rp:0 ' lbaf4='ms:0 lbads:12 rp:0 ' lbaf5='ms:8 lbads:12 rp:0 ' lbaf6='ms:16 lbads:12 rp:0 ' lbaf7='ms:64 lbads:12 rp:0 (in use)'
00:10:57.492 18:17:43 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1
00:10:57.492 18:17:43 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1
00:10:57.492 18:17:43 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns
00:10:57.492 18:17:43 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0
00:10:57.492 18:17:43 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1
00:10:57.492 18:17:43 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:10:57.492 18:17:43 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]]
00:10:57.492 18:17:43 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0
00:10:57.492 18:17:43 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0
00:10:57.492 18:17:43 nvme_fdp -- scripts/common.sh@15 -- # local i
00:10:57.492 18:17:43 nvme_fdp -- scripts/common.sh@18 -- # [[ =~ 0000:00:12.0 ]]
00:10:57.492 18:17:43 nvme_fdp -- scripts/common.sh@22 -- # [[ -z '' ]]
00:10:57.492 18:17:43 nvme_fdp -- scripts/common.sh@24 -- # return 0
00:10:57.492 18:17:43 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2
00:10:57.492 18:17:43 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2
00:10:57.492 18:17:43 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val
00:10:57.492 18:17:43 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:10:57.492 18:17:43 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()'
00:10:57.492 18:17:43 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2
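Between the two nvme_get calls the trace registers nvme1 in the global bookkeeping arrays (ctrls, nvmes, bdfs, ordered_ctrls) and advances the /sys/class/nvme/nvme* loop to the next controller. A hedged sketch of that enumeration loop, assembled from the traced statements (functions.sh@47-63); scan_nvme_ctrls is a hypothetical wrapper name, pci_can_use is stubbed here (the real filter lives in scripts/common.sh@15-24), and the readlink-based bdf lookup is an assumption:

pci_can_use() { return 0; } # stub; the real check matches the bdf against a block/allow list

scan_nvme_ctrls() {
	declare -gA ctrls nvmes bdfs
	declare -ga ordered_ctrls
	local ctrl ctrl_dev ns ns_dev pci

	for ctrl in /sys/class/nvme/nvme*; do
		[[ -e $ctrl ]] || continue
		pci=$(basename "$(readlink -f "$ctrl/device")") # assumption: bdf read from the device symlink
		pci_can_use "$pci" || continue
		ctrl_dev=${ctrl##*/} # e.g. nvme2
		nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"
		local -n _ctrl_ns=${ctrl_dev}_ns # nameref to the per-controller namespace map
		for ns in "$ctrl/${ctrl##*/}n"*; do
			[[ -e $ns ]] || continue
			ns_dev=${ns##*/}
			nvme_get "$ns_dev" id-ns "/dev/$ns_dev"
			_ctrl_ns[${ns##*n}]=$ns_dev # index by namespace number, as in functions.sh@58
		done
		ctrls["$ctrl_dev"]=$ctrl_dev
		nvmes["$ctrl_dev"]=${ctrl_dev}_ns
		bdfs["$ctrl_dev"]=$pci
		ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev
	done
}

The ordered_ctrls index trick (${ctrl_dev/nvme/} strips the "nvme" prefix, leaving the controller number) is what lets later stages of the test walk controllers in numeric order.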
00:10:57.492 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2 id-ctrl fields parsed into nvme2[]: vid=0x1b36 ssvid=0x1af4 sn='12342 ' mn='QEMU NVMe Ctrl ' fr='8.0.0 ' rab=6 ieee=525400 cmic=0 mdts=7 cntlid=0 ver=0x10400 rtd3r=0 rtd3e=0 oaes=0x100 ctratt=0x8000 rrls=0 cntrltype=1 fguid=00000000-0000-0000-0000-000000000000 crdt1=0 crdt2=0 crdt3=0 nvmsr=0 vwci=0 mec=0 oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0 wctemp=343 cctemp=373 mtfa=0 hmpre=0 hmmin=0 tnvmcap=0 unvmcap=0 rpmbs=0 edstt=0 dsto=0 fwug=0 kas=0 hctma=0 mntmt=0 mxtmt=0 sanicap=0 hmminds=0 hmmaxd=0 nsetidmax=0 endgidmax=0 anatt=0 anacap=0 anagrpmax=0 nanagrpid=0 pels=0 domainid=0 megcap=0 sqes=0x66 cqes=0x44 maxcmd=0 nn=256 oncs=0x15d fuses=0 fna=0 vwc=0x7 awun=0 awupf=0 icsvscc=0 nwpc=0 acwu=0 ocfs=0x3 sgls=0x1 mnan=0
-- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:10:57.494 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.494 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.494 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.494 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:10:57.494 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:10:57.494 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.494 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.494 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.494 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:10:57.494 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:10:57.494 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.494 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.494 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:10:57.494 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:10:57.494 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:10:57.494 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.494 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.494 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.494 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:10:57.494 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:10:57.494 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.494 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.494 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.494 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:10:57.494 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:10:57.494 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.494 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.494 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.494 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:10:57.494 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:10:57.494 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.494 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.494 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.494 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:10:57.494 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:10:57.494 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.494 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.494 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.494 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:10:57.494 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:10:57.494 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.494 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.494 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.494 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:10:57.494 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:10:57.494 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.494 
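The register dump above is nvme/functions.sh's nvme_get helper walking `nvme id-ctrl` output for nvme2: each "reg : val" line is split on ':' and eval'd into a global associative array. A minimal sketch of that pattern, assuming nvme-cli's usual colon-separated identify output; parse_id and the /dev/nvme2 target are illustrative stand-ins, not the script's real helpers:

  #!/usr/bin/env bash
  # Sketch: store "reg : val" pairs from nvme-cli in a global
  # associative array, mirroring the eval pattern in the trace.
  parse_id() {
    local ref=$1 reg val
    shift
    local -gA "$ref=()"                 # e.g. declare -gA nvme2=()
    while IFS=: read -r reg val; do
      reg=${reg//[[:space:]]/}          # trim padding around the name
      [[ -n $val ]] || continue         # keep only "reg : val" lines
      eval "${ref}[\$reg]=\${val# }"    # -> nvme2[sqes]=0x66, ...
    done < <("$@")
  }
  parse_id nvme2 nvme id-ctrl /dev/nvme2
  echo "${nvme2[sqes]:-unset}"          # prints 0x66 on this controller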
00:10:57.494 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:10:57.494 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-'
00:10:57.495 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=-
00:10:57.495 18:17:43 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns
00:10:57.495 18:17:43 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:10:57.495 18:17:43 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]]
00:10:57.495 18:17:43 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1
00:10:57.495 18:17:43 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1
00:10:57.495 18:17:43 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val
00:10:57.495 18:17:43 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:10:57.495 18:17:43 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()'
00:10:57.495 18:17:43 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1
00:10:57.495 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000
00:10:57.495 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000
00:10:57.495 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000
00:10:57.495 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14
00:10:57.495 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7
00:10:57.495 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4
00:10:57.495 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3
00:10:57.495 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f
00:10:57.495 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0
00:10:57.495 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0
00:10:57.495 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0
00:10:57.495 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0
00:10:57.495 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1
00:10:57.495 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0
00:10:57.495 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0
00:10:57.495 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0
00:10:57.495 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0
00:10:57.495 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0
00:10:57.495 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0
00:10:57.495 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0
00:10:57.495 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0
00:10:57.495 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0
00:10:57.495 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0
00:10:57.495 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0
00:10:57.495 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0
00:10:57.495 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0
00:10:57.495 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128
00:10:57.495 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128
00:10:57.495 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127
00:10:57.496 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0
00:10:57.496 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0
00:10:57.496 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0
00:10:57.496 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0
00:10:57.496 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0
00:10:57.496 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000
00:10:57.496 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000
00:10:57.496 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:10:57.496 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 '
00:10:57.496 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 '
00:10:57.496 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:10:57.496 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:10:57.496 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:10:57.496 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:10:57.496 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 '
00:10:57.496 18:17:43 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1
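With nvme2n1 recorded, the loop at functions.sh@54-58 above moves to the next sysfs child and repeats the same id-ns parse. A rough equivalent of that namespace walk, assuming the same /sys/class/nvme layout (variable names here mirror the trace but are illustrative):

  # Sketch of the per-namespace loop: visit each nvme2n* node under
  # the controller and index it by namespace number, as _ctrl_ns[] does.
  ctrl=/sys/class/nvme/nvme2
  declare -A nvme2_ns=()
  for ns in "$ctrl/${ctrl##*/}n"*; do   # matches nvme2n1, nvme2n2, ...
    [[ -e $ns ]] || continue            # an unmatched glob stays literal
    ns_dev=${ns##*/}                    # e.g. nvme2n1
    nvme2_ns[${ns_dev##*n}]=$ns_dev     # nvme2_ns[1]=nvme2n1, ...
  done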
00:10:57.496 18:17:43 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:10:57.496 18:17:43 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]]
00:10:57.496 18:17:43 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2
00:10:57.496 18:17:43 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2
00:10:57.496 18:17:43 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val
00:10:57.496 18:17:43 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:10:57.496 18:17:43 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()'
00:10:57.496 18:17:43 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2
00:10:57.496 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000
00:10:57.496 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000
00:10:57.496 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000
00:10:57.496 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14
00:10:57.496 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7
00:10:57.496 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4
00:10:57.496 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3
00:10:57.497 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f
00:10:57.497 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0
00:10:57.497 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0
00:10:57.497 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0
00:10:57.497 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0
00:10:57.497 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1
00:10:57.497 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0
00:10:57.497 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0
00:10:57.497 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0
00:10:57.497 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0
00:10:57.497 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0
00:10:57.497 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0
00:10:57.497 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0
00:10:57.497 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0
00:10:57.497 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0
00:10:57.497 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0
00:10:57.497 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0
00:10:57.497 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0
00:10:57.497 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0
00:10:57.497 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128
00:10:57.497 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128
00:10:57.497 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127
00:10:57.497 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0
00:10:57.497 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0
00:10:57.497 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0
00:10:57.497 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0
00:10:57.497 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0
00:10:57.497 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000
00:10:57.497 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000
00:10:57.497 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 '
00:10:57.497 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 '
00:10:57.497 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 '
00:10:57.498 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 '
00:10:57.498 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:10:57.498 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 '
00:10:57.498 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 '
00:10:57.498 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 '
00:10:57.498 18:17:43 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2
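nvme2n2 reports nsfeat=0x14, i.e. bits 2 and 4 set. Assuming the standard NVMe NSFEAT layout (not stated in the log), bit 4 marks the optimal-performance fields as valid; a quick check of that bit looks like:

  # 0x14 = 0b10100: test bit 4, which (per the NVMe base spec, hedged
  # here) says the npwg/npwa/npdg/npda/nows hints carry valid values.
  nsfeat=0x14
  (( nsfeat & (1 << 4) )) && echo "npwg/npwa/npdg/npda/nows are valid"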
00:10:57.498 18:17:43 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:10:57.498 18:17:43 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]]
00:10:57.498 18:17:43 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3
00:10:57.498 18:17:43 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3
00:10:57.498 18:17:43 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val
00:10:57.498 18:17:43 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:10:57.498 18:17:43 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()'
00:10:57.498 18:17:43 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3
00:10:57.498 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000
00:10:57.498 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000
00:10:57.498 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000
00:10:57.498 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14
00:10:57.498 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7
00:10:57.498 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4
00:10:57.498 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3
00:10:57.498 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f
00:10:57.498 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0
00:10:57.498 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0
00:10:57.498 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0
00:10:57.498 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0
00:10:57.498 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1
00:10:57.498 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0
00:10:57.498 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0
00:10:57.498 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0
00:10:57.498 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0
00:10:57.498 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0
00:10:57.498 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0
00:10:57.498 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0
00:10:57.498 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0
00:10:57.498 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0
00:10:57.499 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0
00:10:57.499 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0
00:10:57.499 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0
00:10:57.499 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0
00:10:57.499 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128
00:10:57.499 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128
00:10:57.499 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127
00:10:57.499 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0
00:10:57.499 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0
00:10:57.499 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0
00:10:57.499 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0
00:10:57.499 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0
00:10:57.499 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000
00:10:57.499 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000
00:10:57.499 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0
lbads:9 rp:0 ]] 00:10:57.499 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:57.499 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:57.499 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.499 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.499 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:57.499 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:57.499 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:57.499 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.499 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.499 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:57.499 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:57.499 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:57.499 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.499 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.499 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:57.499 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:57.499 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:57.499 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.499 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.499 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:57.499 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:57.499 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:57.499 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.499 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.499 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:57.499 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:57.499 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:57.499 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.499 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.499 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:57.499 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:57.499 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:57.499 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.499 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.499 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:57.499 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:57.499 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:57.499 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.499 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.499 18:17:43 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:10:57.499 18:17:43 nvme_fdp -- nvme/functions.sh@60 -- # 
ctrls["$ctrl_dev"]=nvme2 00:10:57.499 18:17:43 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:10:57.499 18:17:43 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:10:57.499 18:17:43 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:10:57.499 18:17:43 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:57.499 18:17:43 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:10:57.499 18:17:43 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:10:57.499 18:17:43 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:10:57.499 18:17:43 nvme_fdp -- scripts/common.sh@15 -- # local i 00:10:57.499 18:17:43 nvme_fdp -- scripts/common.sh@18 -- # [[ =~ 0000:00:13.0 ]] 00:10:57.499 18:17:43 nvme_fdp -- scripts/common.sh@22 -- # [[ -z '' ]] 00:10:57.499 18:17:43 nvme_fdp -- scripts/common.sh@24 -- # return 0 00:10:57.499 18:17:43 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:10:57.499 18:17:43 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:10:57.499 18:17:43 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:10:57.499 18:17:43 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:57.499 18:17:43 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:10:57.499 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.499 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.499 18:17:43 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:10:57.499 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:57.499 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.499 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.499 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:57.499 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:10:57.499 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:10:57.499 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.499 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.499 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:57.499 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:10:57.499 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:10:57.499 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.499 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.499 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:10:57.499 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:10:57.499 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:10:57.499 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.500 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.500 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:57.500 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:10:57.500 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:10:57.500 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.500 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.500 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:57.500 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:10:57.500 18:17:43 
nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:10:57.500 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.500 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.500 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:57.500 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:10:57.500 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:10:57.500 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.500 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.761 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:57.761 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:10:57.761 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:10:57.761 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.761 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.761 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:10:57.761 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:10:57.761 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:10:57.761 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.761 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.761 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:57.761 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:10:57.761 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:10:57.761 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.761 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.761 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.761 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:10:57.761 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:10:57.761 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.761 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.761 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:57.761 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:10:57.761 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:10:57.761 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.761 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.761 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.761 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:10:57.761 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:10:57.761 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.761 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.761 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.761 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:10:57.761 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:10:57.761 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.761 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.761 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:57.761 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:10:57.761 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:10:57.761 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.761 18:17:43 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:57.761 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:10:57.761 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:10:57.761 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:10:57.761 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.761 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.761 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.761 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:10:57.761 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:10:57.761 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.761 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.761 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:57.761 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:10:57.761 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:10:57.761 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.761 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.761 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:57.761 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:57.761 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:10:57.761 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.761 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.761 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.761 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:10:57.761 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:10:57.761 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.761 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.761 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.761 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:10:57.761 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:10:57.761 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.761 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.761 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.761 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:10:57.761 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:10:57.761 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.761 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.761 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.761 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:10:57.761 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:10:57.761 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.761 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.761 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.761 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:10:57.761 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:10:57.761 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.761 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.761 18:17:43 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 
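The wall of IFS=: / read -r reg val / eval lines above is the nvme_get helper at work: every "register : value" line that nvme id-ctrl prints is split on the colon and folded into a bash associative array (nvme3 here), so later helpers can look fields up by name. A minimal standalone sketch of the same idea, using a hypothetical array name and device path rather than the exact SPDK code:

    # Parse "name : value" lines from `nvme id-ctrl` into an associative array.
    declare -A ctrl
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}    # key, e.g. "mdts" (column padding stripped)
        val=${val# }                # drop the single space after the colon
        [[ -n $reg && -n $val ]] && ctrl[$reg]=$val
    done < <(nvme id-ctrl /dev/nvme0)
    printf 'mdts=%s oacs=%s\n' "${ctrl[mdts]}" "${ctrl[oacs]}"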
00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.762 18:17:43 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 
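Several of the registers captured just above are temperatures: the NVMe spec reports WCTEMP, CCTEMP, MNTMT and MXTMT in kelvins, so the 343/373 stored into nvme3[wctemp]/nvme3[cctemp] a moment earlier are the usual QEMU defaults. A quick conversion check:

    # WCTEMP/CCTEMP are in kelvins; subtract 273 for degrees Celsius.
    echo $(( 343 - 273 ))   # wctemp -> 70 C  (warning composite temperature)
    echo $(( 373 - 273 ))   # cctemp -> 100 C (critical composite temperature)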
00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:10:57.762 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.763 18:17:43 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@23 
-- # nvme3[icsvscc]=0 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.763 
18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@61 -- # 
nvmes["$ctrl_dev"]=nvme3_ns 00:10:57.763 18:17:43 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:10:57.764 18:17:43 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:10:57.764 18:17:43 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:10:57.764 18:17:43 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:10:57.764 18:17:43 nvme_fdp -- nvme/functions.sh@202 -- # local _ctrls feature=fdp 00:10:57.764 18:17:43 nvme_fdp -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:10:57.764 18:17:43 nvme_fdp -- nvme/functions.sh@204 -- # get_ctrls_with_feature fdp 00:10:57.764 18:17:43 nvme_fdp -- nvme/functions.sh@190 -- # (( 4 == 0 )) 00:10:57.764 18:17:43 nvme_fdp -- nvme/functions.sh@192 -- # local ctrl feature=fdp 00:10:57.764 18:17:43 nvme_fdp -- nvme/functions.sh@194 -- # type -t ctrl_has_fdp 00:10:57.764 18:17:43 nvme_fdp -- nvme/functions.sh@194 -- # [[ function == function ]] 00:10:57.764 18:17:43 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:10:57.764 18:17:43 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme1 00:10:57.764 18:17:43 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme1 ctratt 00:10:57.764 18:17:43 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme1 00:10:57.764 18:17:43 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme1 00:10:57.764 18:17:43 nvme_fdp -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme1 ctratt 00:10:57.764 18:17:43 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:10:57.764 18:17:43 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:10:57.764 18:17:43 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:10:57.764 18:17:43 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:10:57.764 18:17:43 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:10:57.764 18:17:43 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x8000 00:10:57.764 18:17:43 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:10:57.764 18:17:43 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:10:57.764 18:17:43 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme0 00:10:57.764 18:17:43 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme0 ctratt 00:10:57.764 18:17:43 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme0 00:10:57.764 18:17:43 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme0 00:10:57.764 18:17:43 nvme_fdp -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme0 ctratt 00:10:57.764 18:17:43 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:10:57.764 18:17:43 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:10:57.764 18:17:43 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:10:57.764 18:17:43 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:10:57.764 18:17:43 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:10:57.764 18:17:43 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x8000 00:10:57.764 18:17:43 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:10:57.764 18:17:43 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:10:57.764 18:17:43 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme3 00:10:57.764 18:17:43 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme3 ctratt 00:10:57.764 18:17:43 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme3 00:10:57.764 18:17:43 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme3 00:10:57.764 18:17:43 nvme_fdp -- 
nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme3 ctratt 00:10:57.764 18:17:43 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:10:57.764 18:17:43 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:10:57.764 18:17:43 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:10:57.764 18:17:43 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:10:57.764 18:17:43 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:10:57.764 18:17:43 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x88010 00:10:57.764 18:17:43 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:10:57.764 18:17:43 nvme_fdp -- nvme/functions.sh@197 -- # echo nvme3 00:10:57.764 18:17:43 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:10:57.764 18:17:43 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme2 00:10:57.764 18:17:43 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme2 ctratt 00:10:57.764 18:17:43 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme2 00:10:57.764 18:17:43 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme2 00:10:57.764 18:17:43 nvme_fdp -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme2 ctratt 00:10:57.764 18:17:43 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:10:57.764 18:17:43 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:10:57.764 18:17:43 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:10:57.764 18:17:43 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:10:57.764 18:17:43 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:10:57.764 18:17:43 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x8000 00:10:57.764 18:17:43 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:10:57.764 18:17:43 nvme_fdp -- nvme/functions.sh@205 -- # (( 1 > 0 )) 00:10:57.764 18:17:43 nvme_fdp -- nvme/functions.sh@206 -- # echo nvme3 00:10:57.764 18:17:43 nvme_fdp -- nvme/functions.sh@207 -- # return 0 00:10:57.764 18:17:43 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:10:57.764 18:17:43 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:10:57.764 18:17:43 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:58.330 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:58.898 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:58.898 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:10:58.898 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:58.898 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:10:58.898 18:17:45 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:10:58.898 18:17:45 nvme_fdp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:10:58.898 18:17:45 nvme_fdp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:58.898 18:17:45 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:10:58.898 ************************************ 00:10:58.898 START TEST nvme_flexible_data_placement 00:10:58.898 ************************************ 00:10:58.898 18:17:45 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:10:59.157 Initializing NVMe Controllers 00:10:59.157 Attaching to 0000:00:13.0 00:10:59.157 Controller supports FDP Attached to 0000:00:13.0 00:10:59.157 Namespace ID: 1 Endurance Group ID: 1 
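The controller selection a few lines back comes down to one bit: ctrl_has_fdp reads each controller's CTRATT and tests bit 19, the Flexible Data Placement capability bit. nvme0, nvme1 and nvme2 all report 0x8000 (bit 19 clear), while nvme3 reports 0x88010 (bit 19 set), so nvme3 at 0000:00:13.0 is the controller echoed back and handed to the fdp test binary. The check in isolation, under a hypothetical helper name:

    # Succeeds when a CTRATT value advertises FDP support (bit 19).
    ctratt_has_fdp() {
        local ctratt=$1
        (( ctratt & (1 << 19) ))
    }
    ctratt_has_fdp 0x88010 && echo "FDP supported"       # bit 19 set
    ctratt_has_fdp 0x8000  || echo "FDP not supported"   # bit 19 clear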
00:10:59.157 Initialization complete. 00:10:59.157 00:10:59.157 ================================== 00:10:59.157 == FDP tests for Namespace: #01 == 00:10:59.157 ================================== 00:10:59.157 00:10:59.157 Get Feature: FDP: 00:10:59.157 ================= 00:10:59.157 Enabled: Yes 00:10:59.157 FDP configuration Index: 0 00:10:59.157 00:10:59.157 FDP configurations log page 00:10:59.157 =========================== 00:10:59.157 Number of FDP configurations: 1 00:10:59.157 Version: 0 00:10:59.157 Size: 112 00:10:59.157 FDP Configuration Descriptor: 0 00:10:59.157 Descriptor Size: 96 00:10:59.157 Reclaim Group Identifier format: 2 00:10:59.157 FDP Volatile Write Cache: Not Present 00:10:59.157 FDP Configuration: Valid 00:10:59.157 Vendor Specific Size: 0 00:10:59.157 Number of Reclaim Groups: 2 00:10:59.157 Number of Reclaim Unit Handles: 8 00:10:59.157 Max Placement Identifiers: 128 00:10:59.157 Number of Namespaces Supported: 256 00:10:59.157 Reclaim Unit Nominal Size: 6000000 bytes 00:10:59.157 Estimated Reclaim Unit Time Limit: Not Reported 00:10:59.157 RUH Desc #000: RUH Type: Initially Isolated 00:10:59.157 RUH Desc #001: RUH Type: Initially Isolated 00:10:59.157 RUH Desc #002: RUH Type: Initially Isolated 00:10:59.157 RUH Desc #003: RUH Type: Initially Isolated 00:10:59.157 RUH Desc #004: RUH Type: Initially Isolated 00:10:59.157 RUH Desc #005: RUH Type: Initially Isolated 00:10:59.157 RUH Desc #006: RUH Type: Initially Isolated 00:10:59.157 RUH Desc #007: RUH Type: Initially Isolated 00:10:59.157 00:10:59.157 FDP reclaim unit handle usage log page 00:10:59.157 ====================================== 00:10:59.157 Number of Reclaim Unit Handles: 8 00:10:59.157 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:10:59.157 RUH Usage Desc #001: RUH Attributes: Unused 00:10:59.157 RUH Usage Desc #002: RUH Attributes: Unused 00:10:59.157 RUH Usage Desc #003: RUH Attributes: Unused 00:10:59.157 RUH Usage Desc #004: RUH Attributes: Unused 00:10:59.157 RUH Usage Desc #005: RUH Attributes: Unused 00:10:59.157 RUH Usage Desc #006: RUH Attributes: Unused 00:10:59.157 RUH Usage Desc #007: RUH Attributes: Unused 00:10:59.157 00:10:59.157 FDP statistics log page 00:10:59.157 ======================= 00:10:59.157 Host bytes with metadata written: 1698021376 00:10:59.157 Media bytes with metadata written: 1698279424 00:10:59.157 Media bytes erased: 0 00:10:59.157 00:10:59.157 FDP Reclaim unit handle status 00:10:59.157 ============================== 00:10:59.157 Number of RUHS descriptors: 2 00:10:59.157 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000000ca4 00:10:59.157 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:10:59.157 00:10:59.157 FDP write on placement id: 0 success 00:10:59.157 00:10:59.157 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:10:59.157 00:10:59.157 IO mgmt send: RUH update for Placement ID: #0 Success 00:10:59.157 00:10:59.157 Get Feature: FDP Events for Placement handle: #0 00:10:59.157 ======================== 00:10:59.157 Number of FDP Events: 6 00:10:59.157 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:10:59.157 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:10:59.157 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes 00:10:59.157 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:10:59.157 FDP Event: #4 Type: Media Reallocated Enabled: No 00:10:59.157 FDP Event: #5 Type: Implicitly modified RUH Enabled: No
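The configurations, RUH usage, statistics and events sections above map one-to-one onto the FDP log pages defined by NVMe TP4146. Where the installed nvme-cli lacks dedicated FDP subcommands, the raw pages can usually be pulled with the generic get-log; the log identifiers below are the TP4146 assignments (0x20 configurations, 0x21 RUH usage, 0x22 statistics, 0x23 events), which should be double-checked against the spec revision in use, and on some controllers the endurance group ID must also be supplied via --lsi:

    # Dump the raw FDP log pages (sketch; assumes the controller is /dev/nvme3).
    nvme get-log /dev/nvme3 --log-id=0x20 --log-len=512 --raw-binary > fdp_configs.bin
    nvme get-log /dev/nvme3 --log-id=0x21 --log-len=512 --raw-binary > fdp_ruh_usage.bin
    nvme get-log /dev/nvme3 --log-id=0x22 --log-len=64  --raw-binary > fdp_stats.bin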
00:10:59.157 00:10:59.157 FDP events log page 00:10:59.157 =================== 00:10:59.157 Number of FDP events: 1 00:10:59.157 FDP Event #0: 00:10:59.157 Event Type: RU Not Written to Capacity 00:10:59.157 Placement Identifier: Valid 00:10:59.157 NSID: Valid 00:10:59.157 Location: Valid 00:10:59.157 Placement Identifier: 0 00:10:59.157 Event Timestamp: 3 00:10:59.157 Namespace Identifier: 1 00:10:59.157 Reclaim Group Identifier: 0 00:10:59.157 Reclaim Unit Handle Identifier: 0 00:10:59.157 00:10:59.157 FDP test passed 00:10:59.157 00:10:59.157 real 0m0.226s 00:10:59.157 user 0m0.066s 00:10:59.157 sys 0m0.058s 00:10:59.157 18:17:45 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:59.157 18:17:45 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:10:59.157 ************************************ 00:10:59.157 END TEST nvme_flexible_data_placement 00:10:59.157 ************************************ 00:10:59.157 18:17:45 nvme_fdp -- common/autotest_common.sh@1142 -- # return 0 00:10:59.157 ************************************ 00:10:59.157 END TEST nvme_fdp 00:10:59.157 ************************************ 00:10:59.157 00:10:59.157 real 0m8.022s 00:10:59.157 user 0m1.254s 00:10:59.157 sys 0m1.658s 00:10:59.157 18:17:45 nvme_fdp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:59.157 18:17:45 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:10:59.157 18:17:45 -- common/autotest_common.sh@1142 -- # return 0 00:10:59.157 18:17:45 -- spdk/autotest.sh@236 -- # [[ '' -eq 1 ]] 00:10:59.157 18:17:45 -- spdk/autotest.sh@240 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:10:59.157 18:17:45 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:59.157 18:17:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:59.157 18:17:45 -- common/autotest_common.sh@10 -- # set +x 00:10:59.157 ************************************ 00:10:59.157 START TEST nvme_rpc 00:10:59.157 ************************************ 00:10:59.157 18:17:45 nvme_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:10:59.417 * Looking for test storage... 
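Every suite in this log, nvme_flexible_data_placement above and nvme_rpc starting here, is launched through the run_test helper from autotest_common.sh, which prints the START/END banners and the real/user/sys timing seen between them. A stripped-down sketch of that pattern (hypothetical name; the real helper also handles xtrace toggling and argument validation):

    # Minimal run_test-style wrapper: banner, run, time, banner.
    run_test_sketch() {
        local name=$1; shift
        echo "************ START TEST $name ************"
        local start=$SECONDS rc=0
        "$@" || rc=$?
        echo "************ END TEST $name (rc=$rc after $(( SECONDS - start ))s) ************"
        return $rc
    }
    # Usage: run_test_sketch nvme_rpc /path/to/nvme_rpc.sh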
00:10:59.417 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:59.417 18:17:45 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:59.417 18:17:45 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:10:59.417 18:17:45 nvme_rpc -- common/autotest_common.sh@1524 -- # bdfs=() 00:10:59.417 18:17:45 nvme_rpc -- common/autotest_common.sh@1524 -- # local bdfs 00:10:59.417 18:17:45 nvme_rpc -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:10:59.417 18:17:45 nvme_rpc -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:10:59.417 18:17:45 nvme_rpc -- common/autotest_common.sh@1513 -- # bdfs=() 00:10:59.417 18:17:45 nvme_rpc -- common/autotest_common.sh@1513 -- # local bdfs 00:10:59.417 18:17:45 nvme_rpc -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:59.417 18:17:45 nvme_rpc -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:59.417 18:17:45 nvme_rpc -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:10:59.417 18:17:45 nvme_rpc -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:10:59.417 18:17:45 nvme_rpc -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:59.417 18:17:45 nvme_rpc -- common/autotest_common.sh@1527 -- # echo 0000:00:10.0 00:10:59.417 18:17:45 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:10:59.417 18:17:45 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=83033 00:10:59.417 18:17:45 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:10:59.417 18:17:45 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:10:59.417 18:17:45 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 83033 00:10:59.417 18:17:45 nvme_rpc -- common/autotest_common.sh@829 -- # '[' -z 83033 ']' 00:10:59.417 18:17:45 nvme_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:59.417 18:17:45 nvme_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:59.417 18:17:45 nvme_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:59.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:59.417 18:17:45 nvme_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:59.417 18:17:45 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:59.417 [2024-07-11 18:17:45.826847] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
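get_first_nvme_bdf above resolves the controller for the RPC test by asking gen_nvme.sh for the generated bdev config, extracting the traddr fields with jq, and taking the first of the four addresses (0000:00:10.0). On a box where the kernel nvme driver still owns the devices, the same enumeration can be done straight from sysfs; a sketch independent of the SPDK helpers:

    # Collect NVMe controller PCI addresses (BDFs) from sysfs; first one wins.
    bdfs=()
    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        bdfs+=("$(basename "$(readlink -f "$ctrl/device")")")   # e.g. 0000:00:10.0
    done
    printf 'found %s\n' "${bdfs[@]}"
    echo "first NVMe BDF: ${bdfs[0]}"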
00:10:59.417 [2024-07-11 18:17:45.827535] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83033 ] 00:10:59.676 [2024-07-11 18:17:45.980602] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:59.676 [2024-07-11 18:17:46.023918] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:59.676 [2024-07-11 18:17:46.023976] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:00.609 18:17:46 nvme_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:00.609 18:17:46 nvme_rpc -- common/autotest_common.sh@862 -- # return 0 00:11:00.609 18:17:46 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:11:00.867 Nvme0n1 00:11:00.867 18:17:47 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:11:00.867 18:17:47 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:11:01.125 request: 00:11:01.125 { 00:11:01.125 "bdev_name": "Nvme0n1", 00:11:01.125 "filename": "non_existing_file", 00:11:01.125 "method": "bdev_nvme_apply_firmware", 00:11:01.125 "req_id": 1 00:11:01.125 } 00:11:01.125 Got JSON-RPC error response 00:11:01.125 response: 00:11:01.125 { 00:11:01.125 "code": -32603, 00:11:01.125 "message": "open file failed." 00:11:01.125 } 00:11:01.125 18:17:47 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:11:01.125 18:17:47 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:11:01.125 18:17:47 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:11:01.384 18:17:47 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:11:01.384 18:17:47 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 83033 00:11:01.384 18:17:47 nvme_rpc -- common/autotest_common.sh@948 -- # '[' -z 83033 ']' 00:11:01.384 18:17:47 nvme_rpc -- common/autotest_common.sh@952 -- # kill -0 83033 00:11:01.384 18:17:47 nvme_rpc -- common/autotest_common.sh@953 -- # uname 00:11:01.384 18:17:47 nvme_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:01.384 18:17:47 nvme_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 83033 00:11:01.384 killing process with pid 83033 00:11:01.384 18:17:47 nvme_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:01.384 18:17:47 nvme_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:01.384 18:17:47 nvme_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 83033' 00:11:01.384 18:17:47 nvme_rpc -- common/autotest_common.sh@967 -- # kill 83033 00:11:01.384 18:17:47 nvme_rpc -- common/autotest_common.sh@972 -- # wait 83033 00:11:01.643 ************************************ 00:11:01.643 END TEST nvme_rpc 00:11:01.643 ************************************ 00:11:01.643 00:11:01.643 real 0m2.305s 00:11:01.643 user 0m4.691s 00:11:01.643 sys 0m0.525s 00:11:01.643 18:17:47 nvme_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:01.643 18:17:47 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:01.643 18:17:47 -- common/autotest_common.sh@1142 -- # return 0 00:11:01.643 18:17:47 -- spdk/autotest.sh@241 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 
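The nvme_rpc pass that just completed above is an error-path check. Stripped of xtrace noise, and using only the RPCs visible in the trace, it amounts to:

    # Attach a controller, point bdev_nvme_apply_firmware at a path that does
    # not exist, and require the JSON-RPC -32603 "open file failed." error.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0
    if "$rpc" bdev_nvme_apply_firmware non_existing_file Nvme0n1; then
        echo 'expected apply_firmware to fail' >&2; exit 1
    fi
    "$rpc" bdev_nvme_detach_controller Nvme0
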
00:11:01.643 18:17:47 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:01.643 18:17:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:01.643 18:17:47 -- common/autotest_common.sh@10 -- # set +x 00:11:01.643 ************************************ 00:11:01.643 START TEST nvme_rpc_timeouts 00:11:01.643 ************************************ 00:11:01.643 18:17:47 nvme_rpc_timeouts -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:11:01.643 * Looking for test storage... 00:11:01.643 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:01.643 18:17:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:01.643 18:17:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_83087 00:11:01.643 18:17:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_83087 00:11:01.643 18:17:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=83111 00:11:01.643 18:17:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:11:01.643 18:17:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:11:01.643 18:17:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 83111 00:11:01.643 18:17:47 nvme_rpc_timeouts -- common/autotest_common.sh@829 -- # '[' -z 83111 ']' 00:11:01.643 18:17:47 nvme_rpc_timeouts -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:01.643 18:17:47 nvme_rpc_timeouts -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:01.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:01.643 18:17:47 nvme_rpc_timeouts -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:01.643 18:17:47 nvme_rpc_timeouts -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:01.643 18:17:47 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:11:01.903 [2024-07-11 18:17:48.105608] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:11:01.903 [2024-07-11 18:17:48.105829] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83111 ] 00:11:01.903 [2024-07-11 18:17:48.254964] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:01.903 [2024-07-11 18:17:48.292479] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:01.903 [2024-07-11 18:17:48.292527] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:02.838 18:17:49 nvme_rpc_timeouts -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:02.838 18:17:49 nvme_rpc_timeouts -- common/autotest_common.sh@862 -- # return 0 00:11:02.838 18:17:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:11:02.838 Checking default timeout settings: 00:11:02.838 18:17:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:11:03.100 Making settings changes with rpc: 00:11:03.100 18:17:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:11:03.100 18:17:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:11:03.358 Check default vs. modified settings: 00:11:03.358 18:17:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:11:03.358 18:17:49 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:11:03.924 18:17:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:11:03.924 18:17:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:11:03.924 18:17:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:11:03.924 18:17:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_83087 00:11:03.924 18:17:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:03.924 18:17:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:11:03.924 18:17:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_83087 00:11:03.924 18:17:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:11:03.924 18:17:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:03.924 Setting action_on_timeout is changed as expected. 00:11:03.924 18:17:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:11:03.924 18:17:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:11:03.924 18:17:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 
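The snapshot-and-modify step above, condensed: every command appears in the trace, and only the redirections into the two tmpfiles are inferred from their names:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" save_config > /tmp/settings_default_83087      # defaults, before any change
    "$rpc" bdev_nvme_set_options --timeout-us=12000000 \
        --timeout-admin-us=24000000 --action-on-timeout=abort
    "$rpc" save_config > /tmp/settings_modified_83087     # after the change
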
00:11:03.924 18:17:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:11:03.924 18:17:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:11:03.924 18:17:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_83087 00:11:03.924 18:17:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:03.924 18:17:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:11:03.924 18:17:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_83087 00:11:03.924 18:17:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:11:03.924 18:17:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:03.924 Setting timeout_us is changed as expected. 00:11:03.924 18:17:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:11:03.924 18:17:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:11:03.924 18:17:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:11:03.924 18:17:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:11:03.924 18:17:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_83087 00:11:03.924 18:17:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:11:03.924 18:17:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:03.924 18:17:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:11:03.924 18:17:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_83087 00:11:03.924 18:17:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:11:03.924 18:17:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:03.924 Setting timeout_admin_us is changed as expected. 00:11:03.924 18:17:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:11:03.924 18:17:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:11:03.924 18:17:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 
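The three per-setting checks above share one shape. A small helper capturing it, following the grep/awk/sed pipeline from the trace, where a changed value is the pass condition:

    # Compare one setting between the default and modified config snapshots.
    check_setting() {   # e.g. check_setting timeout_us
        local before after
        before=$(grep "$1" /tmp/settings_default_83087  | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        after=$(grep "$1"  /tmp/settings_modified_83087 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        [[ "$before" == "$after" ]] && { echo "Setting $1 unexpectedly unchanged" >&2; return 1; }
        echo "Setting $1 is changed as expected."
    }
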
00:11:03.924 18:17:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:11:03.924 18:17:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_83087 /tmp/settings_modified_83087 00:11:03.924 18:17:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 83111 00:11:03.924 18:17:50 nvme_rpc_timeouts -- common/autotest_common.sh@948 -- # '[' -z 83111 ']' 00:11:03.924 18:17:50 nvme_rpc_timeouts -- common/autotest_common.sh@952 -- # kill -0 83111 00:11:03.924 18:17:50 nvme_rpc_timeouts -- common/autotest_common.sh@953 -- # uname 00:11:03.924 18:17:50 nvme_rpc_timeouts -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:03.924 18:17:50 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 83111 00:11:03.924 killing process with pid 83111 00:11:03.924 18:17:50 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:03.924 18:17:50 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:03.924 18:17:50 nvme_rpc_timeouts -- common/autotest_common.sh@966 -- # echo 'killing process with pid 83111' 00:11:03.924 18:17:50 nvme_rpc_timeouts -- common/autotest_common.sh@967 -- # kill 83111 00:11:03.924 18:17:50 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # wait 83111 00:11:04.182 RPC TIMEOUT SETTING TEST PASSED. 00:11:04.182 18:17:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 00:11:04.182 00:11:04.182 real 0m2.483s 00:11:04.182 user 0m5.212s 00:11:04.182 sys 0m0.520s 00:11:04.182 18:17:50 nvme_rpc_timeouts -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:04.182 18:17:50 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:11:04.182 ************************************ 00:11:04.182 END TEST nvme_rpc_timeouts 00:11:04.182 ************************************ 00:11:04.183 18:17:50 -- common/autotest_common.sh@1142 -- # return 0 00:11:04.183 18:17:50 -- spdk/autotest.sh@243 -- # uname -s 00:11:04.183 18:17:50 -- spdk/autotest.sh@243 -- # '[' Linux = Linux ']' 00:11:04.183 18:17:50 -- spdk/autotest.sh@244 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:11:04.183 18:17:50 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:04.183 18:17:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:04.183 18:17:50 -- common/autotest_common.sh@10 -- # set +x 00:11:04.183 ************************************ 00:11:04.183 START TEST sw_hotplug 00:11:04.183 ************************************ 00:11:04.183 18:17:50 sw_hotplug -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:11:04.183 * Looking for test storage... 
00:11:04.183 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:04.183 18:17:50 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:04.750 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:04.750 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:04.750 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:04.750 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:04.750 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:04.750 18:17:51 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:11:04.750 18:17:51 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:11:04.750 18:17:51 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 00:11:04.750 18:17:51 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:11:04.750 18:17:51 sw_hotplug -- scripts/common.sh@309 -- # local bdf bdfs 00:11:04.750 18:17:51 sw_hotplug -- scripts/common.sh@310 -- # local nvmes 00:11:04.750 18:17:51 sw_hotplug -- scripts/common.sh@312 -- # [[ -n '' ]] 00:11:04.750 18:17:51 sw_hotplug -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:11:04.750 18:17:51 sw_hotplug -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:11:04.750 18:17:51 sw_hotplug -- scripts/common.sh@295 -- # local bdf= 00:11:04.750 18:17:51 sw_hotplug -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:11:04.750 18:17:51 sw_hotplug -- scripts/common.sh@230 -- # local class 00:11:04.750 18:17:51 sw_hotplug -- scripts/common.sh@231 -- # local subclass 00:11:04.750 18:17:51 sw_hotplug -- scripts/common.sh@232 -- # local progif 00:11:04.750 18:17:51 sw_hotplug -- scripts/common.sh@233 -- # printf %02x 1 00:11:04.750 18:17:51 sw_hotplug -- scripts/common.sh@233 -- # class=01 00:11:04.750 18:17:51 sw_hotplug -- scripts/common.sh@234 -- # printf %02x 8 00:11:04.750 18:17:51 sw_hotplug -- scripts/common.sh@234 -- # subclass=08 00:11:04.750 18:17:51 sw_hotplug -- scripts/common.sh@235 -- # printf %02x 2 00:11:04.750 18:17:51 sw_hotplug -- scripts/common.sh@235 -- # progif=02 00:11:04.750 18:17:51 sw_hotplug -- scripts/common.sh@237 -- # hash lspci 00:11:04.750 18:17:51 sw_hotplug -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:11:04.750 18:17:51 sw_hotplug -- scripts/common.sh@239 -- # lspci -mm -n -D 00:11:04.750 18:17:51 sw_hotplug -- scripts/common.sh@240 -- # grep -i -- -p02 00:11:04.750 18:17:51 sw_hotplug -- scripts/common.sh@242 -- # tr -d '"' 00:11:04.750 18:17:51 sw_hotplug -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:11:04.750 18:17:51 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:11:04.750 18:17:51 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:11:04.750 18:17:51 sw_hotplug -- scripts/common.sh@15 -- # local i 00:11:04.750 18:17:51 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:11:04.750 18:17:51 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:04.750 18:17:51 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:11:04.750 18:17:51 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:11:04.750 18:17:51 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:11:04.750 18:17:51 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:11:04.750 18:17:51 sw_hotplug -- 
scripts/common.sh@15 -- # local i 00:11:04.750 18:17:51 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:11:04.750 18:17:51 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:04.750 18:17:51 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:11:04.750 18:17:51 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:11:04.750 18:17:51 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:11:04.750 18:17:51 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:12.0 00:11:04.750 18:17:51 sw_hotplug -- scripts/common.sh@15 -- # local i 00:11:04.750 18:17:51 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:12.0 ]] 00:11:04.750 18:17:51 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:04.750 18:17:51 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:11:04.750 18:17:51 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:12.0 00:11:04.750 18:17:51 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:11:04.750 18:17:51 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:13.0 00:11:04.750 18:17:51 sw_hotplug -- scripts/common.sh@15 -- # local i 00:11:04.750 18:17:51 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:13.0 ]] 00:11:04.750 18:17:51 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:04.750 18:17:51 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:11:04.750 18:17:51 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:13.0 00:11:04.750 18:17:51 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:11:04.750 18:17:51 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:11:04.750 18:17:51 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:11:04.750 18:17:51 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:11:04.750 18:17:51 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:11:04.750 18:17:51 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:11:04.750 18:17:51 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:11:04.750 18:17:51 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:11:04.750 18:17:51 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:11:04.750 18:17:51 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:11:04.750 18:17:51 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:11:04.750 18:17:51 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:11:04.750 18:17:51 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:11:04.750 18:17:51 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:11:04.750 18:17:51 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:11:04.750 18:17:51 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:11:04.750 18:17:51 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:11:04.750 18:17:51 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:11:04.750 18:17:51 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:11:04.750 18:17:51 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:11:04.750 18:17:51 sw_hotplug -- scripts/common.sh@325 -- # (( 4 )) 00:11:04.750 18:17:51 sw_hotplug -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:11:04.750 18:17:51 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:11:04.750 18:17:51 sw_hotplug -- 
nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:11:04.750 18:17:51 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:05.318 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:05.318 Waiting for block devices as requested 00:11:05.318 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:11:05.576 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:11:05.576 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:11:05.576 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:11:10.853 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:11:10.853 18:17:57 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:11:10.853 18:17:57 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:11.112 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:11:11.112 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:11.112 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:11:11.678 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:11:11.678 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:11:11.678 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:11:11.936 18:17:58 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:11:11.936 18:17:58 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:11.936 18:17:58 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:11:11.936 18:17:58 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:11:11.936 18:17:58 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=83945 00:11:11.936 18:17:58 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:11:11.936 18:17:58 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:11:11.936 18:17:58 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:11.936 18:17:58 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:11:11.936 18:17:58 sw_hotplug -- common/autotest_common.sh@705 -- # local cmd_es=0 00:11:11.936 18:17:58 sw_hotplug -- common/autotest_common.sh@707 -- # [[ -t 0 ]] 00:11:11.936 18:17:58 sw_hotplug -- common/autotest_common.sh@707 -- # exec 00:11:11.936 18:17:58 sw_hotplug -- common/autotest_common.sh@709 -- # local time=0 TIMEFORMAT=%2R 00:11:11.936 18:17:58 sw_hotplug -- common/autotest_common.sh@715 -- # remove_attach_helper 3 6 false 00:11:11.936 18:17:58 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:11.936 18:17:58 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:11:11.936 18:17:58 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:11:11.936 18:17:58 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:11.936 18:17:58 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:12.195 Initializing NVMe Controllers 00:11:12.195 Attaching to 0000:00:10.0 00:11:12.195 Attaching to 0000:00:11.0 00:11:12.195 Attached to 0000:00:10.0 00:11:12.195 Attached to 0000:00:11.0 00:11:12.195 Initialization complete. Starting I/O... 
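The nvme_in_userspace walk traced above selects controllers by PCI class code 01/08/02 (mass storage / NVM Express / NVMe programming interface). Its per-BDF loop collapses to one pipeline built from the traced commands:

    # Print full-domain BDFs of NVMe-class functions. Field 2 of
    # `lspci -mm -n -D` is the quoted class code; match 0108, strip quotes.
    lspci -mm -n -D | grep -i -- -p02 \
        | awk -v 'cc="0108"' -F ' ' '{ if (cc ~ $2) print $1 }' | tr -d '"'
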
00:11:12.195 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:11:12.195 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:11:12.195 00:11:13.131 QEMU NVMe Ctrl (12340 ): 1336 I/Os completed (+1336) 00:11:13.131 QEMU NVMe Ctrl (12341 ): 1415 I/Os completed (+1415) 00:11:13.131 00:11:14.067 QEMU NVMe Ctrl (12340 ): 3276 I/Os completed (+1940) 00:11:14.067 QEMU NVMe Ctrl (12341 ): 3395 I/Os completed (+1980) 00:11:14.067 00:11:15.005 QEMU NVMe Ctrl (12340 ): 5555 I/Os completed (+2279) 00:11:15.005 QEMU NVMe Ctrl (12341 ): 5726 I/Os completed (+2331) 00:11:15.005 00:11:16.380 QEMU NVMe Ctrl (12340 ): 7796 I/Os completed (+2241) 00:11:16.380 QEMU NVMe Ctrl (12341 ): 8039 I/Os completed (+2313) 00:11:16.380 00:11:17.315 QEMU NVMe Ctrl (12340 ): 9814 I/Os completed (+2018) 00:11:17.315 QEMU NVMe Ctrl (12341 ): 10141 I/Os completed (+2102) 00:11:17.315 00:11:17.882 18:18:04 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:17.882 18:18:04 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:17.882 18:18:04 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:17.882 [2024-07-11 18:18:04.218070] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:11:17.882 Controller removed: QEMU NVMe Ctrl (12340 ) 00:11:17.882 [2024-07-11 18:18:04.219926] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:17.882 [2024-07-11 18:18:04.220185] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:17.882 [2024-07-11 18:18:04.220223] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:17.883 [2024-07-11 18:18:04.220247] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:17.883 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:11:17.883 [2024-07-11 18:18:04.222153] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:17.883 [2024-07-11 18:18:04.222233] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:17.883 [2024-07-11 18:18:04.222256] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:17.883 [2024-07-11 18:18:04.222277] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:17.883 EAL: Cannot open sysfs resource 00:11:17.883 EAL: pci_scan_one(): cannot parse resource 00:11:17.883 EAL: Scan for (pci) bus failed. 00:11:17.883 18:18:04 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:17.883 18:18:04 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:17.883 [2024-07-11 18:18:04.247380] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:11:17.883 Controller removed: QEMU NVMe Ctrl (12341 ) 00:11:17.883 [2024-07-11 18:18:04.248892] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:17.883 [2024-07-11 18:18:04.248952] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:17.883 [2024-07-11 18:18:04.248979] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:17.883 [2024-07-11 18:18:04.249001] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:17.883 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:11:17.883 [2024-07-11 18:18:04.250864] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:17.883 [2024-07-11 18:18:04.250910] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:17.883 [2024-07-11 18:18:04.250939] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:17.883 [2024-07-11 18:18:04.250959] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:17.883 18:18:04 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:11:17.883 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:11:17.883 18:18:04 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:17.883 EAL: Scan for (pci) bus failed. 00:11:18.141 18:18:04 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:18.141 18:18:04 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:18.141 18:18:04 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:18.141 00:11:18.141 18:18:04 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:18.141 18:18:04 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:18.141 18:18:04 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:18.141 18:18:04 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:18.141 18:18:04 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:18.141 Attaching to 0000:00:10.0 00:11:18.141 Attached to 0000:00:10.0 00:11:18.141 18:18:04 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:18.400 18:18:04 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:18.400 18:18:04 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:18.400 Attaching to 0000:00:11.0 00:11:18.400 Attached to 0000:00:11.0 00:11:19.336 QEMU NVMe Ctrl (12340 ): 1957 I/Os completed (+1957) 00:11:19.336 QEMU NVMe Ctrl (12341 ): 1810 I/Os completed (+1810) 00:11:19.336 00:11:20.271 QEMU NVMe Ctrl (12340 ): 4041 I/Os completed (+2084) 00:11:20.271 QEMU NVMe Ctrl (12341 ): 3963 I/Os completed (+2153) 00:11:20.271 00:11:21.207 QEMU NVMe Ctrl (12340 ): 6045 I/Os completed (+2004) 00:11:21.207 QEMU NVMe Ctrl (12341 ): 6045 I/Os completed (+2082) 00:11:21.207 00:11:22.149 QEMU NVMe Ctrl (12340 ): 8241 I/Os completed (+2196) 00:11:22.149 QEMU NVMe Ctrl (12341 ): 8285 I/Os completed (+2240) 00:11:22.149 00:11:23.086 QEMU NVMe Ctrl (12340 ): 10453 I/Os completed (+2212) 00:11:23.086 QEMU NVMe Ctrl (12341 ): 10536 I/Os completed (+2251) 00:11:23.086 00:11:24.023 QEMU NVMe Ctrl (12340 ): 12473 I/Os completed (+2020) 00:11:24.023 QEMU NVMe Ctrl (12341 ): 12656 I/Os completed (+2120) 00:11:24.024 00:11:25.401 QEMU NVMe Ctrl (12340 ): 14701 I/Os completed (+2228) 00:11:25.401 QEMU NVMe Ctrl (12341 ): 14914 I/Os completed (+2258) 
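That was one full surprise-removal/reattach cycle for both controllers. The xtrace only shows the values being echoed (1, uio_pci_generic, the BDF), so the sysfs destinations in this sketch are an assumption based on the standard PCI hotplug interface rather than something the log states; only the rescan path appears verbatim, in the trap set later in this log:

    dev=0000:00:10.0
    echo 1 > "/sys/bus/pci/devices/$dev/remove"   # assumed target: surprise-remove the function
    echo 1 > /sys/bus/pci/rescan                  # rediscovery; the uio_pci_generic/BDF echoes
                                                  # then rebind the userspace driver ("Attaching to ...")
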
00:11:25.401 00:11:26.338 QEMU NVMe Ctrl (12340 ): 16909 I/Os completed (+2208) 00:11:26.338 QEMU NVMe Ctrl (12341 ): 17153 I/Os completed (+2239) 00:11:26.338 00:11:27.270 QEMU NVMe Ctrl (12340 ): 19018 I/Os completed (+2109) 00:11:27.270 QEMU NVMe Ctrl (12341 ): 19339 I/Os completed (+2186) 00:11:27.270 00:11:28.200 QEMU NVMe Ctrl (12340 ): 21210 I/Os completed (+2192) 00:11:28.200 QEMU NVMe Ctrl (12341 ): 21571 I/Os completed (+2232) 00:11:28.200 00:11:29.132 QEMU NVMe Ctrl (12340 ): 23426 I/Os completed (+2216) 00:11:29.132 QEMU NVMe Ctrl (12341 ): 23840 I/Os completed (+2269) 00:11:29.132 00:11:30.069 QEMU NVMe Ctrl (12340 ): 25530 I/Os completed (+2104) 00:11:30.069 QEMU NVMe Ctrl (12341 ): 26019 I/Os completed (+2179) 00:11:30.069 00:11:30.327 18:18:16 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:11:30.327 18:18:16 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:30.327 18:18:16 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:30.327 18:18:16 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:30.327 [2024-07-11 18:18:16.564545] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:11:30.327 Controller removed: QEMU NVMe Ctrl (12340 ) 00:11:30.327 [2024-07-11 18:18:16.567426] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:30.327 [2024-07-11 18:18:16.567636] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:30.327 [2024-07-11 18:18:16.567705] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:30.327 [2024-07-11 18:18:16.567825] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:30.327 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:11:30.327 [2024-07-11 18:18:16.569901] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:30.327 [2024-07-11 18:18:16.570066] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:30.327 [2024-07-11 18:18:16.570257] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:30.327 [2024-07-11 18:18:16.570434] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:30.327 18:18:16 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:30.327 18:18:16 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:30.327 [2024-07-11 18:18:16.595151] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:11:30.327 Controller removed: QEMU NVMe Ctrl (12341 ) 00:11:30.327 [2024-07-11 18:18:16.596740] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:30.327 [2024-07-11 18:18:16.596794] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:30.327 [2024-07-11 18:18:16.596820] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:30.327 [2024-07-11 18:18:16.596839] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:30.327 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:11:30.327 [2024-07-11 18:18:16.598442] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:30.327 [2024-07-11 18:18:16.598496] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:30.327 [2024-07-11 18:18:16.598522] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:30.327 [2024-07-11 18:18:16.598539] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:30.327 18:18:16 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:11:30.327 18:18:16 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:30.327 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:11:30.327 EAL: Scan for (pci) bus failed. 00:11:30.327 18:18:16 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:30.327 18:18:16 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:30.327 18:18:16 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:30.586 18:18:16 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:30.586 18:18:16 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:30.586 18:18:16 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:30.586 18:18:16 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:30.586 18:18:16 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:30.586 Attaching to 0000:00:10.0 00:11:30.586 Attached to 0000:00:10.0 00:11:30.586 18:18:16 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:30.586 18:18:16 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:30.586 18:18:16 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:30.586 Attaching to 0000:00:11.0 00:11:30.586 Attached to 0000:00:11.0 00:11:31.154 QEMU NVMe Ctrl (12340 ): 1208 I/Os completed (+1208) 00:11:31.154 QEMU NVMe Ctrl (12341 ): 1097 I/Os completed (+1097) 00:11:31.154 00:11:32.087 QEMU NVMe Ctrl (12340 ): 3240 I/Os completed (+2032) 00:11:32.087 QEMU NVMe Ctrl (12341 ): 3246 I/Os completed (+2149) 00:11:32.087 00:11:33.021 QEMU NVMe Ctrl (12340 ): 5280 I/Os completed (+2040) 00:11:33.021 QEMU NVMe Ctrl (12341 ): 5394 I/Os completed (+2148) 00:11:33.021 00:11:34.404 QEMU NVMe Ctrl (12340 ): 7284 I/Os completed (+2004) 00:11:34.404 QEMU NVMe Ctrl (12341 ): 7461 I/Os completed (+2067) 00:11:34.404 00:11:35.355 QEMU NVMe Ctrl (12340 ): 9476 I/Os completed (+2192) 00:11:35.355 QEMU NVMe Ctrl (12341 ): 9678 I/Os completed (+2217) 00:11:35.355 00:11:36.290 QEMU NVMe Ctrl (12340 ): 11608 I/Os completed (+2132) 00:11:36.290 QEMU NVMe Ctrl (12341 ): 11853 I/Os completed (+2175) 00:11:36.290 00:11:37.224 QEMU NVMe Ctrl (12340 ): 13564 I/Os completed (+1956) 00:11:37.224 QEMU NVMe Ctrl (12341 ): 13928 I/Os completed (+2075) 00:11:37.224 
00:11:38.162 QEMU NVMe Ctrl (12340 ): 15681 I/Os completed (+2117) 00:11:38.162 QEMU NVMe Ctrl (12341 ): 16113 I/Os completed (+2185) 00:11:38.162 00:11:39.100 QEMU NVMe Ctrl (12340 ): 17817 I/Os completed (+2136) 00:11:39.100 QEMU NVMe Ctrl (12341 ): 18305 I/Os completed (+2192) 00:11:39.100 00:11:40.037 QEMU NVMe Ctrl (12340 ): 19925 I/Os completed (+2108) 00:11:40.037 QEMU NVMe Ctrl (12341 ): 20498 I/Os completed (+2193) 00:11:40.037 00:11:41.424 QEMU NVMe Ctrl (12340 ): 22123 I/Os completed (+2198) 00:11:41.424 QEMU NVMe Ctrl (12341 ): 22735 I/Os completed (+2237) 00:11:41.424 00:11:42.008 QEMU NVMe Ctrl (12340 ): 24331 I/Os completed (+2208) 00:11:42.008 QEMU NVMe Ctrl (12341 ): 24977 I/Os completed (+2242) 00:11:42.008 00:11:42.575 18:18:28 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:11:42.575 18:18:28 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:42.575 18:18:28 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:42.575 18:18:28 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:42.575 [2024-07-11 18:18:28.906308] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:11:42.575 Controller removed: QEMU NVMe Ctrl (12340 ) 00:11:42.575 [2024-07-11 18:18:28.908167] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:42.575 [2024-07-11 18:18:28.908352] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:42.575 [2024-07-11 18:18:28.908504] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:42.575 [2024-07-11 18:18:28.908581] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:42.575 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:11:42.575 [2024-07-11 18:18:28.910573] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:42.575 [2024-07-11 18:18:28.910732] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:42.575 [2024-07-11 18:18:28.910799] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:42.575 [2024-07-11 18:18:28.911016] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:42.575 EAL: eal_parse_sysfs_value(): cannot read sysfs value /sys/bus/pci/devices/0000:00:10.0/class 00:11:42.575 EAL: Scan for (pci) bus failed. 00:11:42.575 18:18:28 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:42.575 18:18:28 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:42.575 [2024-07-11 18:18:28.927504] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:11:42.575 Controller removed: QEMU NVMe Ctrl (12341 ) 00:11:42.575 [2024-07-11 18:18:28.929112] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:42.575 [2024-07-11 18:18:28.929296] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:42.575 [2024-07-11 18:18:28.929439] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:42.575 [2024-07-11 18:18:28.929570] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:42.575 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:11:42.575 [2024-07-11 18:18:28.931387] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:42.575 [2024-07-11 18:18:28.931553] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:42.576 [2024-07-11 18:18:28.931679] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:42.576 [2024-07-11 18:18:28.931738] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:42.576 18:18:28 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:11:42.576 18:18:28 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:42.576 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:11:42.576 EAL: Scan for (pci) bus failed. 00:11:42.834 18:18:29 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:42.834 18:18:29 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:42.834 18:18:29 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:42.834 18:18:29 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:42.834 18:18:29 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:42.834 18:18:29 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:42.834 18:18:29 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:42.834 18:18:29 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:42.834 Attaching to 0000:00:10.0 00:11:42.834 Attached to 0000:00:10.0 00:11:43.093 18:18:29 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:43.093 18:18:29 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:43.093 18:18:29 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:43.093 Attaching to 0000:00:11.0 00:11:43.093 Attached to 0000:00:11.0 00:11:43.093 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:11:43.093 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:11:43.093 [2024-07-11 18:18:29.283051] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:11:55.296 18:18:41 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:11:55.296 18:18:41 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:55.296 18:18:41 sw_hotplug -- common/autotest_common.sh@715 -- # time=43.07 00:11:55.296 18:18:41 sw_hotplug -- common/autotest_common.sh@716 -- # echo 43.07 00:11:55.296 18:18:41 sw_hotplug -- common/autotest_common.sh@718 -- # return 0 00:11:55.296 18:18:41 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=43.07 00:11:55.296 18:18:41 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 43.07 2 00:11:55.296 remove_attach_helper took 43.07s to complete (handling 2 nvme drive(s)) 18:18:41 sw_hotplug -- 
nvme/sw_hotplug.sh@91 -- # sleep 6 00:12:01.858 18:18:47 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 83945 00:12:01.858 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (83945) - No such process 00:12:01.858 18:18:47 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 83945 00:12:01.858 18:18:47 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:12:01.858 18:18:47 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:12:01.858 18:18:47 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:12:01.858 18:18:47 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=84495 00:12:01.858 18:18:47 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:01.858 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:01.858 18:18:47 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:12:01.858 18:18:47 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 84495 00:12:01.858 18:18:47 sw_hotplug -- common/autotest_common.sh@829 -- # '[' -z 84495 ']' 00:12:01.858 18:18:47 sw_hotplug -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:01.858 18:18:47 sw_hotplug -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:01.858 18:18:47 sw_hotplug -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:01.858 18:18:47 sw_hotplug -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:01.858 18:18:47 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:01.858 [2024-07-11 18:18:47.401717] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:12:01.858 [2024-07-11 18:18:47.401938] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84495 ] 00:12:01.858 [2024-07-11 18:18:47.551269] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:01.858 [2024-07-11 18:18:47.593893] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:02.117 18:18:48 sw_hotplug -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:02.117 18:18:48 sw_hotplug -- common/autotest_common.sh@862 -- # return 0 00:12:02.117 18:18:48 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:12:02.117 18:18:48 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:02.117 18:18:48 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:02.117 18:18:48 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:02.117 18:18:48 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:12:02.117 18:18:48 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:12:02.117 18:18:48 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:12:02.117 18:18:48 sw_hotplug -- common/autotest_common.sh@705 -- # local cmd_es=0 00:12:02.117 18:18:48 sw_hotplug -- common/autotest_common.sh@707 -- # [[ -t 0 ]] 00:12:02.117 18:18:48 sw_hotplug -- common/autotest_common.sh@707 -- # exec 00:12:02.117 18:18:48 sw_hotplug -- common/autotest_common.sh@709 -- # local time=0 TIMEFORMAT=%2R 00:12:02.117 18:18:48 sw_hotplug -- common/autotest_common.sh@715 -- # remove_attach_helper 3 6 true 00:12:02.117 18:18:48 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:12:02.117 18:18:48 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:12:02.117 18:18:48 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:12:02.117 18:18:48 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:12:02.117 18:18:48 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:12:08.683 18:18:54 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:08.683 18:18:54 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:08.683 18:18:54 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:08.683 18:18:54 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:08.683 18:18:54 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:08.683 18:18:54 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:08.683 18:18:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:08.683 18:18:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:08.683 18:18:54 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:08.683 18:18:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:08.683 18:18:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:08.683 18:18:54 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.683 18:18:54 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:08.683 [2024-07-11 18:18:54.405358] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
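With use_bdev=true, the helper decides whether a controller is gone by asking the target instead of sysfs. bdev_bdfs, exactly as traced above:

    # PCI addresses still backing NVMe bdevs; the poll loop prints
    # 'Still waiting for %s to be gone' and sleeps 0.5s until this list no
    # longer mentions the removed BDFs. rpc_cmd is the suite's rpc.py wrapper.
    bdev_bdfs() {
        rpc_cmd bdev_get_bdevs \
            | jq -r '.[].driver_specific.nvme[].pci_address' | sort -u
    }
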
00:12:08.683 18:18:54 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.683 [2024-07-11 18:18:54.407630] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:08.683 [2024-07-11 18:18:54.407698] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:08.683 [2024-07-11 18:18:54.407720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:08.683 [2024-07-11 18:18:54.407743] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:08.683 [2024-07-11 18:18:54.407757] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:08.683 [2024-07-11 18:18:54.407771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:08.683 [2024-07-11 18:18:54.407784] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:08.683 [2024-07-11 18:18:54.407800] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:08.683 [2024-07-11 18:18:54.407812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:08.683 [2024-07-11 18:18:54.407825] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:08.683 [2024-07-11 18:18:54.407837] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:08.683 [2024-07-11 18:18:54.407851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:08.683 18:18:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:08.683 18:18:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:08.683 [2024-07-11 18:18:54.805355] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:12:08.683 [2024-07-11 18:18:54.807763] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:08.683 [2024-07-11 18:18:54.807823] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:08.683 [2024-07-11 18:18:54.807846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:08.683 [2024-07-11 18:18:54.807866] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:08.683 [2024-07-11 18:18:54.807880] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:08.683 [2024-07-11 18:18:54.807893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:08.683 [2024-07-11 18:18:54.807907] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:08.684 [2024-07-11 18:18:54.807919] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:08.684 [2024-07-11 18:18:54.807935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:08.684 [2024-07-11 18:18:54.807946] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:08.684 [2024-07-11 18:18:54.807962] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:08.684 [2024-07-11 18:18:54.807974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:08.684 18:18:54 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:08.684 18:18:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:08.684 18:18:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:08.684 18:18:54 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:08.684 18:18:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:08.684 18:18:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:08.684 18:18:54 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.684 18:18:54 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:08.684 18:18:54 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.684 18:18:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:08.684 18:18:54 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:08.684 18:18:55 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:08.684 18:18:55 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:08.684 18:18:55 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:08.994 18:18:55 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:08.994 18:18:55 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:08.994 18:18:55 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:08.994 18:18:55 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:08.994 18:18:55 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:12:08.994 18:18:55 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:08.994 18:18:55 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:08.994 18:18:55 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:21.231 18:19:07 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:21.231 18:19:07 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:21.231 18:19:07 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:21.231 18:19:07 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:21.231 18:19:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:21.231 18:19:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:21.231 18:19:07 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:21.231 18:19:07 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:21.231 18:19:07 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:21.231 18:19:07 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:21.231 18:19:07 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:21.231 18:19:07 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:21.231 18:19:07 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:21.231 18:19:07 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:21.231 18:19:07 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:21.231 18:19:07 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:21.231 18:19:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:21.231 18:19:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:21.231 18:19:07 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:21.231 18:19:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:21.231 18:19:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:21.231 18:19:07 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:21.231 18:19:07 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:21.231 18:19:07 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:21.231 [2024-07-11 18:19:07.405548] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:12:21.231 [2024-07-11 18:19:07.408538] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:21.231 [2024-07-11 18:19:07.408797] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:21.231 [2024-07-11 18:19:07.409047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:21.231 [2024-07-11 18:19:07.409320] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:21.231 [2024-07-11 18:19:07.409519] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:21.231 [2024-07-11 18:19:07.409731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:21.231 [2024-07-11 18:19:07.409968] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:21.231 [2024-07-11 18:19:07.410192] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:21.231 [2024-07-11 18:19:07.410425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:21.231 [2024-07-11 18:19:07.410629] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:21.231 [2024-07-11 18:19:07.410759] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:21.231 [2024-07-11 18:19:07.410963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:21.231 18:19:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:21.231 18:19:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:21.490 [2024-07-11 18:19:07.805581] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
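With both controllers failed, the test spins on sw_hotplug.sh:50-51 until bdev_bdfs returns nothing, which is exactly what the (( 2 > 0 )), sleep 0.5, and printf traces around this point show. A sketch of that loop, with the loop shape inferred from the trace:

    # Removal-wait loop (sw_hotplug.sh:50-51), reconstructed from the xtrace;
    # poll twice a second until no NVMe bdev reports a PCI address.
    while bdfs=($(bdev_bdfs)) && ((${#bdfs[@]} > 0)); do          # @50
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"   # @51
        sleep 0.5                                                 # @50
    done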
00:12:21.490 [2024-07-11 18:19:07.808057] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:21.490 [2024-07-11 18:19:07.808299] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:21.490 [2024-07-11 18:19:07.808466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:21.490 [2024-07-11 18:19:07.808706] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:21.490 [2024-07-11 18:19:07.808827] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:21.490 [2024-07-11 18:19:07.808960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:21.490 [2024-07-11 18:19:07.809150] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:21.490 [2024-07-11 18:19:07.809385] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:21.490 [2024-07-11 18:19:07.809576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:21.490 [2024-07-11 18:19:07.809718] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:21.490 [2024-07-11 18:19:07.809874] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:21.490 [2024-07-11 18:19:07.810028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:21.748 18:19:07 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:21.748 18:19:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:21.748 18:19:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:21.748 18:19:07 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:21.748 18:19:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:21.748 18:19:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:21.748 18:19:07 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:21.748 18:19:07 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:21.748 18:19:07 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:21.748 18:19:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:21.748 18:19:07 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:21.748 18:19:08 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:21.748 18:19:08 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:21.748 18:19:08 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:21.748 18:19:08 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:22.007 18:19:08 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:22.007 18:19:08 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:22.007 18:19:08 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:22.007 18:19:08 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:12:22.007 18:19:08 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:22.007 18:19:08 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:22.007 18:19:08 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:34.214 18:19:20 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:34.214 18:19:20 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:34.214 18:19:20 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:34.214 18:19:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:34.214 18:19:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:34.214 18:19:20 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:34.214 18:19:20 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.214 18:19:20 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:34.214 18:19:20 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.214 18:19:20 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:34.214 18:19:20 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:34.214 18:19:20 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:34.214 18:19:20 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:34.214 18:19:20 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:34.214 18:19:20 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:34.214 18:19:20 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:34.214 18:19:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:34.214 18:19:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:34.214 18:19:20 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:34.214 18:19:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:34.214 18:19:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:34.214 18:19:20 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.214 18:19:20 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:34.214 [2024-07-11 18:19:20.405716] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
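The echo sequence traced at sw_hotplug.sh@56-62 puts the devices back. xtrace records only the echoed values (1, uio_pci_generic, the BDF twice, then an empty string), not the redirection targets, so every sysfs path in the following sketch is an assumption about where those writes likely go:

    # Hypothetical re-attach sequence; the echoed values are taken from the log,
    # but the redirection targets are NOT captured by xtrace and are guesses.
    echo 1 > /sys/bus/pci/rescan                                            # @56
    for dev in "${nvmes[@]}"; do                                            # @58
        echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"  # @59 (path assumed)
        echo "$dev" > /sys/bus/pci/drivers_probe                            # @60 (path assumed)
        echo "$dev" > /sys/bus/pci/drivers/uio_pci_generic/bind             # @61 (path assumed)
        echo '' > "/sys/bus/pci/devices/$dev/driver_override"               # @62: clear override
    done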
00:12:34.214 [2024-07-11 18:19:20.408228] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:34.214 [2024-07-11 18:19:20.408312] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:34.214 [2024-07-11 18:19:20.408334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:34.214 [2024-07-11 18:19:20.408359] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:34.214 [2024-07-11 18:19:20.408373] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:34.214 [2024-07-11 18:19:20.408388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:34.214 [2024-07-11 18:19:20.408400] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:34.214 [2024-07-11 18:19:20.408414] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:34.214 [2024-07-11 18:19:20.408427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:34.214 [2024-07-11 18:19:20.408443] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:34.214 [2024-07-11 18:19:20.408456] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:34.214 [2024-07-11 18:19:20.408470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:34.214 18:19:20 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.214 18:19:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:34.214 18:19:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:34.473 [2024-07-11 18:19:20.805714] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:12:34.473 [2024-07-11 18:19:20.808007] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:34.473 [2024-07-11 18:19:20.808068] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:34.473 [2024-07-11 18:19:20.808091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:34.473 [2024-07-11 18:19:20.808144] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:34.473 [2024-07-11 18:19:20.808161] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:34.473 [2024-07-11 18:19:20.808174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:34.473 [2024-07-11 18:19:20.808209] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:34.473 [2024-07-11 18:19:20.808222] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:34.473 [2024-07-11 18:19:20.808237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:34.473 [2024-07-11 18:19:20.808249] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:34.473 [2024-07-11 18:19:20.808263] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:34.473 [2024-07-11 18:19:20.808276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:34.731 18:19:20 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:34.731 18:19:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:34.731 18:19:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:34.731 18:19:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:34.731 18:19:20 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:34.731 18:19:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:34.731 18:19:20 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.731 18:19:20 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:34.731 18:19:20 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.731 18:19:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:34.731 18:19:20 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:34.731 18:19:21 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:34.731 18:19:21 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:34.731 18:19:21 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:34.989 18:19:21 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:34.989 18:19:21 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:34.989 18:19:21 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:34.989 18:19:21 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:34.989 18:19:21 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:12:34.989 18:19:21 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:34.989 18:19:21 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:34.989 18:19:21 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:47.193 18:19:33 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:47.193 18:19:33 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:47.193 18:19:33 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:47.193 18:19:33 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:47.193 18:19:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:47.193 18:19:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:47.193 18:19:33 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.193 18:19:33 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:47.193 18:19:33 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.193 18:19:33 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:47.193 18:19:33 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:47.193 18:19:33 sw_hotplug -- common/autotest_common.sh@715 -- # time=45.00 00:12:47.193 18:19:33 sw_hotplug -- common/autotest_common.sh@716 -- # echo 45.00 00:12:47.193 18:19:33 sw_hotplug -- common/autotest_common.sh@718 -- # return 0 00:12:47.193 18:19:33 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.00 00:12:47.193 18:19:33 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.00 2 00:12:47.193 remove_attach_helper took 45.00s to complete (handling 2 nvme drive(s)) 18:19:33 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:12:47.193 18:19:33 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.193 18:19:33 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:47.193 18:19:33 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.193 18:19:33 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:12:47.193 18:19:33 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.193 18:19:33 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:47.193 18:19:33 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.193 18:19:33 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:12:47.193 18:19:33 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:12:47.193 18:19:33 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:12:47.193 18:19:33 sw_hotplug -- common/autotest_common.sh@705 -- # local cmd_es=0 00:12:47.193 18:19:33 sw_hotplug -- common/autotest_common.sh@707 -- # [[ -t 0 ]] 00:12:47.193 18:19:33 sw_hotplug -- common/autotest_common.sh@707 -- # exec 00:12:47.193 18:19:33 sw_hotplug -- common/autotest_common.sh@709 -- # local time=0 TIMEFORMAT=%2R 00:12:47.193 18:19:33 sw_hotplug -- common/autotest_common.sh@715 -- # remove_attach_helper 3 6 true 00:12:47.193 18:19:33 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:12:47.193 18:19:33 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:12:47.193 18:19:33 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:12:47.193 18:19:33 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:12:47.193 18:19:33 sw_hotplug -- 
nvme/sw_hotplug.sh@36 -- # sleep 6 00:12:53.751 18:19:39 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:53.751 18:19:39 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:53.751 18:19:39 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:53.751 18:19:39 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:53.751 18:19:39 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:53.751 18:19:39 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:53.751 18:19:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:53.751 18:19:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:53.751 18:19:39 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:53.751 18:19:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:53.751 18:19:39 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:53.751 18:19:39 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:53.751 18:19:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:53.751 [2024-07-11 18:19:39.437217] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:12:53.751 [2024-07-11 18:19:39.438723] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:53.751 [2024-07-11 18:19:39.438789] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:53.751 [2024-07-11 18:19:39.438810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:53.751 [2024-07-11 18:19:39.438831] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:53.751 [2024-07-11 18:19:39.438844] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:53.752 [2024-07-11 18:19:39.438859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:53.752 [2024-07-11 18:19:39.438872] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:53.752 [2024-07-11 18:19:39.438886] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:53.752 [2024-07-11 18:19:39.438898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:53.752 [2024-07-11 18:19:39.438914] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:53.752 [2024-07-11 18:19:39.438926] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:53.752 [2024-07-11 18:19:39.438940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:53.752 18:19:39 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:53.752 18:19:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:53.752 18:19:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:53.752 [2024-07-11 18:19:39.837211] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
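At 18:19:33 the suite re-arms hotplug (bdev_nvme_set_hotplug -d, then -e) and calls debug_remove_attach_helper 3 6 true: three hotplug events, a 6-second settle period, and verification through SPDK bdevs (use_bdev=true) rather than kernel device nodes. timing_cmd wraps the run with TIMEFORMAT=%2R, which is where the time=45.00 value and the "remove_attach_helper took 45.00s" summary earlier come from. From the @27-@40 traces, the helper's outer loop is roughly:

    # Rough outline of remove_attach_helper (sw_hotplug.sh:27-43); the locals and
    # line numbers come from the xtrace, the sysfs 'remove' path is assumed.
    remove_attach_helper() {
        local hotplug_events=$1 hotplug_wait=$2 use_bdev=$3   # @27-29: here 3, 6, true
        local dev bdfs                                        # @30
        sleep "$hotplug_wait"                                 # @36: let the target settle
        while ((hotplug_events--)); do                        # @38
            for dev in "${nvmes[@]}"; do                      # @39
                echo 1 > "/sys/bus/pci/devices/$dev/remove"   # @40 (target path assumed)
            done
            : # @43-@71: wait for bdevs to vanish, re-attach, verify (see traces above)
        done
    }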
00:12:53.752 [2024-07-11 18:19:39.838675] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:53.752 [2024-07-11 18:19:39.838732] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:53.752 [2024-07-11 18:19:39.838754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:53.752 [2024-07-11 18:19:39.838772] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:53.752 [2024-07-11 18:19:39.838786] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:53.752 [2024-07-11 18:19:39.838798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:53.752 [2024-07-11 18:19:39.838812] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:53.752 [2024-07-11 18:19:39.838823] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:53.752 [2024-07-11 18:19:39.838836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:53.752 [2024-07-11 18:19:39.838847] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:53.752 [2024-07-11 18:19:39.838860] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:53.752 [2024-07-11 18:19:39.838872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:53.752 18:19:39 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:53.752 18:19:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:53.752 18:19:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:53.752 18:19:39 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:53.752 18:19:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:53.752 18:19:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:53.752 18:19:39 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:53.752 18:19:39 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:53.752 18:19:39 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:53.752 18:19:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:53.752 18:19:40 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:53.752 18:19:40 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:53.752 18:19:40 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:53.752 18:19:40 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:54.011 18:19:40 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:54.011 18:19:40 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:54.011 18:19:40 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:54.011 18:19:40 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:54.011 18:19:40 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:12:54.011 18:19:40 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:54.011 18:19:40 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:54.011 18:19:40 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:06.262 18:19:52 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:13:06.262 18:19:52 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:13:06.262 18:19:52 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:13:06.262 18:19:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:06.262 18:19:52 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:06.262 18:19:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:06.262 18:19:52 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:06.262 18:19:52 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:06.262 18:19:52 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:06.262 18:19:52 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:06.262 18:19:52 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:06.262 18:19:52 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:06.262 18:19:52 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:06.262 18:19:52 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:06.262 18:19:52 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:06.262 18:19:52 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:13:06.262 18:19:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:06.262 18:19:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:06.262 18:19:52 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:06.262 18:19:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:06.262 18:19:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:06.262 18:19:52 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:06.262 18:19:52 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:06.262 [2024-07-11 18:19:52.437414] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
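The re-attach check at sw_hotplug.sh:66-71 is traced just above. The heavily backslash-escaped right-hand side of the @71 test is only bash xtrace quoting a literal [[ == ]] pattern; decoded, the step asserts that bdev_bdfs reports exactly the two expected BDFs again:

    # Verification step (sw_hotplug.sh:66-71), reconstructed from the xtrace:
    sleep 12                                          # @66: give hotplug + examine time to run
    bdfs=($(bdev_bdfs))                               # @70: re-query NVMe bdev PCI addresses
    [[ ${bdfs[*]} == "0000:00:10.0 0000:00:11.0" ]]   # @71: both devices must be back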
00:13:06.262 [2024-07-11 18:19:52.439083] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:06.262 [2024-07-11 18:19:52.439310] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:06.262 [2024-07-11 18:19:52.439495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:06.262 [2024-07-11 18:19:52.439653] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:06.262 [2024-07-11 18:19:52.439800] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:06.262 [2024-07-11 18:19:52.439938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:06.262 [2024-07-11 18:19:52.440081] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:06.262 [2024-07-11 18:19:52.440262] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:06.262 [2024-07-11 18:19:52.440396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:06.262 [2024-07-11 18:19:52.440565] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:06.262 [2024-07-11 18:19:52.440688] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:06.262 [2024-07-11 18:19:52.440844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:06.262 18:19:52 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:06.262 18:19:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:13:06.262 18:19:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:13:06.521 [2024-07-11 18:19:52.837461] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:13:06.521 [2024-07-11 18:19:52.839476] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:06.521 [2024-07-11 18:19:52.839686] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:06.521 [2024-07-11 18:19:52.839842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:06.521 [2024-07-11 18:19:52.840173] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:06.521 [2024-07-11 18:19:52.840343] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:06.521 [2024-07-11 18:19:52.840532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:06.521 [2024-07-11 18:19:52.840679] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:06.521 [2024-07-11 18:19:52.840799] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:06.521 [2024-07-11 18:19:52.840941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:06.521 [2024-07-11 18:19:52.841131] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:06.521 [2024-07-11 18:19:52.841257] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:06.521 [2024-07-11 18:19:52.841462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:06.779 18:19:52 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:13:06.779 18:19:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:06.779 18:19:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:06.779 18:19:52 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:06.779 18:19:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:06.779 18:19:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:06.779 18:19:52 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:06.779 18:19:52 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:06.779 18:19:52 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:06.779 18:19:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:13:06.779 18:19:53 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:06.779 18:19:53 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:06.779 18:19:53 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:06.779 18:19:53 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:07.037 18:19:53 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:07.037 18:19:53 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:07.037 18:19:53 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:07.037 18:19:53 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:07.037 18:19:53 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:13:07.037 18:19:53 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:07.037 18:19:53 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:07.037 18:19:53 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:19.236 18:20:05 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:13:19.236 18:20:05 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:13:19.236 18:20:05 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:13:19.236 18:20:05 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:19.236 18:20:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:19.236 18:20:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:19.236 18:20:05 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:19.236 18:20:05 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:19.236 18:20:05 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:19.236 18:20:05 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:19.236 18:20:05 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:19.236 18:20:05 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:19.236 18:20:05 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:19.236 18:20:05 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:19.236 18:20:05 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:19.236 18:20:05 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:13:19.236 18:20:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:19.236 18:20:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:19.236 18:20:05 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:19.236 18:20:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:19.236 18:20:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:19.236 18:20:05 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:19.236 18:20:05 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:19.236 18:20:05 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:19.236 [2024-07-11 18:20:05.537625] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:13:19.236 [2024-07-11 18:20:05.539163] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:19.236 [2024-07-11 18:20:05.539240] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:19.236 [2024-07-11 18:20:05.539261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:19.236 [2024-07-11 18:20:05.539284] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:19.236 [2024-07-11 18:20:05.539298] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:19.236 [2024-07-11 18:20:05.539315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:19.236 [2024-07-11 18:20:05.539327] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:19.236 [2024-07-11 18:20:05.539341] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:19.236 [2024-07-11 18:20:05.539353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:19.236 [2024-07-11 18:20:05.539368] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:19.236 [2024-07-11 18:20:05.539380] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:19.236 [2024-07-11 18:20:05.539393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:19.236 18:20:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:13:19.236 18:20:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:13:19.804 [2024-07-11 18:20:05.937611] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:13:19.804 [2024-07-11 18:20:05.939004] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:19.804 [2024-07-11 18:20:05.939062] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:19.804 [2024-07-11 18:20:05.939084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:19.804 [2024-07-11 18:20:05.939230] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:19.804 [2024-07-11 18:20:05.939254] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:19.804 [2024-07-11 18:20:05.939268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:19.804 [2024-07-11 18:20:05.939284] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:19.804 [2024-07-11 18:20:05.939296] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:19.804 [2024-07-11 18:20:05.939313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:19.804 [2024-07-11 18:20:05.939324] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:19.804 [2024-07-11 18:20:05.939338] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:19.804 [2024-07-11 18:20:05.939350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:19.804 18:20:06 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:13:19.804 18:20:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:19.804 18:20:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:19.804 18:20:06 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:19.804 18:20:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:19.804 18:20:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:19.804 18:20:06 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:19.804 18:20:06 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:19.804 18:20:06 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:19.804 18:20:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:13:19.804 18:20:06 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:19.804 18:20:06 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:19.804 18:20:06 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:19.804 18:20:06 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:20.063 18:20:06 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:20.063 18:20:06 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:20.063 18:20:06 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:20.063 18:20:06 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:20.063 18:20:06 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:13:20.063 18:20:06 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:20.063 18:20:06 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:20.063 18:20:06 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:32.274 18:20:18 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:13:32.274 18:20:18 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:13:32.274 18:20:18 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:13:32.274 18:20:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:32.274 18:20:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:32.274 18:20:18 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:32.274 18:20:18 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:32.274 18:20:18 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:32.274 18:20:18 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:32.274 18:20:18 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:32.274 18:20:18 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:32.274 18:20:18 sw_hotplug -- common/autotest_common.sh@715 -- # time=45.11 00:13:32.274 18:20:18 sw_hotplug -- common/autotest_common.sh@716 -- # echo 45.11 00:13:32.274 18:20:18 sw_hotplug -- common/autotest_common.sh@718 -- # return 0 00:13:32.274 18:20:18 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.11 00:13:32.274 18:20:18 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.11 2 00:13:32.274 remove_attach_helper took 45.11s to complete (handling 2 nvme drive(s)) 18:20:18 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:13:32.274 18:20:18 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 84495 00:13:32.274 18:20:18 sw_hotplug -- common/autotest_common.sh@948 -- # '[' -z 84495 ']' 00:13:32.274 18:20:18 sw_hotplug -- common/autotest_common.sh@952 -- # kill -0 84495 00:13:32.274 18:20:18 sw_hotplug -- common/autotest_common.sh@953 -- # uname 00:13:32.274 18:20:18 sw_hotplug -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:32.274 18:20:18 sw_hotplug -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84495 00:13:32.274 18:20:18 sw_hotplug -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:32.274 18:20:18 sw_hotplug -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:32.274 18:20:18 sw_hotplug -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84495' 00:13:32.274 killing process with pid 84495 00:13:32.274 18:20:18 sw_hotplug -- common/autotest_common.sh@967 -- # kill 84495 00:13:32.274 18:20:18 sw_hotplug -- common/autotest_common.sh@972 -- # wait 84495 00:13:32.533 18:20:18 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:32.792 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:33.359 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:33.359 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:33.359 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:13:33.359 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:13:33.619 00:13:33.619 real 2m29.348s 00:13:33.619 user 1m48.992s 00:13:33.619 sys 0m20.080s 00:13:33.619 18:20:19 sw_hotplug -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:13:33.619 18:20:19 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:33.619 ************************************ 00:13:33.619 END TEST sw_hotplug 00:13:33.619 ************************************ 00:13:33.619 18:20:19 -- common/autotest_common.sh@1142 -- # return 0 00:13:33.619 18:20:19 -- spdk/autotest.sh@247 -- # [[ 1 -eq 1 ]] 00:13:33.619 18:20:19 -- spdk/autotest.sh@248 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:13:33.619 18:20:19 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:33.619 18:20:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:33.619 18:20:19 -- common/autotest_common.sh@10 -- # set +x 00:13:33.619 ************************************ 00:13:33.619 START TEST nvme_xnvme 00:13:33.619 ************************************ 00:13:33.619 18:20:19 nvme_xnvme -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:13:33.619 * Looking for test storage... 00:13:33.619 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:13:33.619 18:20:19 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:33.619 18:20:19 nvme_xnvme -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:33.619 18:20:19 nvme_xnvme -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:33.619 18:20:19 nvme_xnvme -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:33.619 18:20:19 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:33.619 18:20:19 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:33.619 18:20:19 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:33.619 18:20:19 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:13:33.619 18:20:19 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:33.619 18:20:19 nvme_xnvme -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 
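Before nvme_xnvme starts, the harness tears down the hotplug app with killprocess (common/autotest_common.sh@948-@972, traced above around pid 84495). A sketch reconstructed from those trace lines; the real helper has more branches, for instance whatever the sudo comparison at @958 guards:

    # Sketch of killprocess, inferred from the @948-@972 xtrace lines:
    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1                           # @948: a pid is required
        kill -0 "$pid" || return 0                          # @952: still alive?
        if [ "$(uname)" = Linux ]; then                     # @953
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid") # @954: here, reactor_0
            [ "$process_name" = sudo ] && :                 # @958: sudo special case (elided)
        fi
        echo "killing process with pid $pid"                # @966
        kill "$pid"                                         # @967
        wait "$pid"                                         # @972: reap and surface its rc
    }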
00:13:33.619 18:20:19 nvme_xnvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:33.619 18:20:19 nvme_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:33.619 18:20:19 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:33.619 ************************************ 00:13:33.619 START TEST xnvme_to_malloc_dd_copy 00:13:33.619 ************************************ 00:13:33.619 18:20:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1123 -- # malloc_to_xnvme_copy 00:13:33.619 18:20:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:13:33.619 18:20:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@190 -- # [[ -e /sys/module/null_blk ]] 00:13:33.619 18:20:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@190 -- # modprobe null_blk gb=1 00:13:33.619 18:20:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@191 -- # return 00:13:33.619 18:20:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 00:13:33.619 18:20:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:13:33.619 18:20:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:13:33.619 18:20:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@18 -- # local io 00:13:33.619 18:20:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:13:33.619 18:20:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:13:33.619 18:20:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:13:33.619 18:20:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:13:33.619 18:20:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:13:33.619 18:20:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:13:33.619 18:20:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:13:33.619 18:20:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:13:33.619 18:20:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:13:33.619 18:20:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:13:33.619 18:20:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:13:33.619 18:20:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:13:33.619 18:20:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:13:33.619 18:20:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:13:33.619 18:20:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:13:33.619 18:20:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:13:33.879 { 00:13:33.879 "subsystems": [ 00:13:33.879 { 00:13:33.879 "subsystem": "bdev", 00:13:33.879 "config": [ 00:13:33.879 { 00:13:33.879 "params": { 00:13:33.879 "block_size": 512, 00:13:33.879 "num_blocks": 2097152, 00:13:33.879 "name": "malloc0" 00:13:33.879 }, 00:13:33.879 "method": 
"bdev_malloc_create" 00:13:33.879 }, 00:13:33.879 { 00:13:33.879 "params": { 00:13:33.879 "io_mechanism": "libaio", 00:13:33.879 "filename": "/dev/nullb0", 00:13:33.879 "name": "null0" 00:13:33.879 }, 00:13:33.879 "method": "bdev_xnvme_create" 00:13:33.879 }, 00:13:33.879 { 00:13:33.879 "method": "bdev_wait_for_examine" 00:13:33.879 } 00:13:33.879 ] 00:13:33.879 } 00:13:33.879 ] 00:13:33.879 } 00:13:33.879 [2024-07-11 18:20:20.068949] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:13:33.879 [2024-07-11 18:20:20.069983] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85828 ] 00:13:33.879 [2024-07-11 18:20:20.221051] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:33.879 [2024-07-11 18:20:20.264780] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:40.346  Copying: 171/1024 [MB] (171 MBps) Copying: 357/1024 [MB] (185 MBps) Copying: 543/1024 [MB] (185 MBps) Copying: 729/1024 [MB] (186 MBps) Copying: 915/1024 [MB] (185 MBps) Copying: 1024/1024 [MB] (average 183 MBps) 00:13:40.346 00:13:40.346 18:20:26 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:13:40.346 18:20:26 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:13:40.346 18:20:26 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:13:40.346 18:20:26 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:13:40.346 { 00:13:40.346 "subsystems": [ 00:13:40.346 { 00:13:40.346 "subsystem": "bdev", 00:13:40.346 "config": [ 00:13:40.346 { 00:13:40.346 "params": { 00:13:40.346 "block_size": 512, 00:13:40.346 "num_blocks": 2097152, 00:13:40.346 "name": "malloc0" 00:13:40.346 }, 00:13:40.346 "method": "bdev_malloc_create" 00:13:40.346 }, 00:13:40.346 { 00:13:40.346 "params": { 00:13:40.346 "io_mechanism": "libaio", 00:13:40.346 "filename": "/dev/nullb0", 00:13:40.346 "name": "null0" 00:13:40.346 }, 00:13:40.346 "method": "bdev_xnvme_create" 00:13:40.346 }, 00:13:40.346 { 00:13:40.346 "method": "bdev_wait_for_examine" 00:13:40.346 } 00:13:40.346 ] 00:13:40.346 } 00:13:40.346 ] 00:13:40.346 } 00:13:40.346 [2024-07-11 18:20:26.634302] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:13:40.346 [2024-07-11 18:20:26.634516] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85904 ] 00:13:40.606 [2024-07-11 18:20:26.781220] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:40.606 [2024-07-11 18:20:26.818765] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:46.559  Copying: 189/1024 [MB] (189 MBps) Copying: 380/1024 [MB] (190 MBps) Copying: 566/1024 [MB] (186 MBps) Copying: 753/1024 [MB] (187 MBps) Copying: 940/1024 [MB] (187 MBps) Copying: 1024/1024 [MB] (average 188 MBps) 00:13:46.559 00:13:46.559 18:20:32 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:13:46.559 18:20:32 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:13:46.559 18:20:32 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:13:46.559 18:20:32 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:13:46.559 18:20:32 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:13:46.559 18:20:32 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:13:46.559 { 00:13:46.559 "subsystems": [ 00:13:46.559 { 00:13:46.559 "subsystem": "bdev", 00:13:46.559 "config": [ 00:13:46.559 { 00:13:46.559 "params": { 00:13:46.559 "block_size": 512, 00:13:46.559 "num_blocks": 2097152, 00:13:46.559 "name": "malloc0" 00:13:46.559 }, 00:13:46.559 "method": "bdev_malloc_create" 00:13:46.559 }, 00:13:46.559 { 00:13:46.559 "params": { 00:13:46.559 "io_mechanism": "io_uring", 00:13:46.559 "filename": "/dev/nullb0", 00:13:46.559 "name": "null0" 00:13:46.559 }, 00:13:46.559 "method": "bdev_xnvme_create" 00:13:46.559 }, 00:13:46.559 { 00:13:46.559 "method": "bdev_wait_for_examine" 00:13:46.559 } 00:13:46.559 ] 00:13:46.559 } 00:13:46.559 ] 00:13:46.559 } 00:13:46.816 [2024-07-11 18:20:33.003972] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:13:46.816 [2024-07-11 18:20:33.004191] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85987 ] 00:13:46.816 [2024-07-11 18:20:33.148993] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:46.816 [2024-07-11 18:20:33.183177] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:52.825  Copying: 193/1024 [MB] (193 MBps) Copying: 388/1024 [MB] (194 MBps) Copying: 581/1024 [MB] (193 MBps) Copying: 775/1024 [MB] (194 MBps) Copying: 975/1024 [MB] (199 MBps) Copying: 1024/1024 [MB] (average 195 MBps) 00:13:52.825 00:13:52.825 18:20:39 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:13:52.825 18:20:39 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:13:52.825 18:20:39 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:13:52.825 18:20:39 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:13:52.825 { 00:13:52.825 "subsystems": [ 00:13:52.825 { 00:13:52.825 "subsystem": "bdev", 00:13:52.825 "config": [ 00:13:52.825 { 00:13:52.825 "params": { 00:13:52.825 "block_size": 512, 00:13:52.825 "num_blocks": 2097152, 00:13:52.825 "name": "malloc0" 00:13:52.825 }, 00:13:52.825 "method": "bdev_malloc_create" 00:13:52.825 }, 00:13:52.825 { 00:13:52.825 "params": { 00:13:52.825 "io_mechanism": "io_uring", 00:13:52.825 "filename": "/dev/nullb0", 00:13:52.825 "name": "null0" 00:13:52.825 }, 00:13:52.825 "method": "bdev_xnvme_create" 00:13:52.825 }, 00:13:52.825 { 00:13:52.825 "method": "bdev_wait_for_examine" 00:13:52.825 } 00:13:52.825 ] 00:13:52.825 } 00:13:52.825 ] 00:13:52.825 } 00:13:52.825 [2024-07-11 18:20:39.140070] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:13:52.825 [2024-07-11 18:20:39.140952] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86058 ] 00:13:53.083 [2024-07-11 18:20:39.289022] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:53.083 [2024-07-11 18:20:39.324074] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:59.029  Copying: 200/1024 [MB] (200 MBps) Copying: 383/1024 [MB] (183 MBps) Copying: 581/1024 [MB] (197 MBps) Copying: 778/1024 [MB] (196 MBps) Copying: 975/1024 [MB] (196 MBps) Copying: 1024/1024 [MB] (average 195 MBps) 00:13:59.029 00:13:59.029 18:20:45 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:13:59.029 18:20:45 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@195 -- # modprobe -r null_blk 00:13:59.029 ************************************ 00:13:59.029 END TEST xnvme_to_malloc_dd_copy 00:13:59.029 ************************************ 00:13:59.029 00:13:59.029 real 0m25.288s 00:13:59.029 user 0m20.372s 00:13:59.029 sys 0m4.423s 00:13:59.029 18:20:45 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:59.029 18:20:45 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:13:59.029 18:20:45 nvme_xnvme -- common/autotest_common.sh@1142 -- # return 0 00:13:59.029 18:20:45 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:59.029 18:20:45 nvme_xnvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:59.029 18:20:45 nvme_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:59.029 18:20:45 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:59.029 ************************************ 00:13:59.029 START TEST xnvme_bdevperf 00:13:59.029 ************************************ 00:13:59.029 18:20:45 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1123 -- # xnvme_bdevperf 00:13:59.029 18:20:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:13:59.029 18:20:45 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@190 -- # [[ -e /sys/module/null_blk ]] 00:13:59.029 18:20:45 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@190 -- # modprobe null_blk gb=1 00:13:59.029 18:20:45 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@191 -- # return 00:13:59.029 18:20:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:13:59.029 18:20:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:13:59.029 18:20:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@60 -- # local io 00:13:59.029 18:20:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:13:59.029 18:20:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:13:59.029 18:20:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:13:59.029 18:20:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:13:59.029 18:20:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:13:59.029 18:20:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:13:59.029 18:20:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:13:59.029 18:20:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # 
for io in "${xnvme_io[@]}" 00:13:59.029 18:20:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:13:59.029 18:20:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:13:59.029 18:20:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:13:59.029 18:20:45 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:59.029 18:20:45 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:59.029 { 00:13:59.029 "subsystems": [ 00:13:59.029 { 00:13:59.029 "subsystem": "bdev", 00:13:59.029 "config": [ 00:13:59.029 { 00:13:59.029 "params": { 00:13:59.029 "io_mechanism": "libaio", 00:13:59.029 "filename": "/dev/nullb0", 00:13:59.029 "name": "null0" 00:13:59.029 }, 00:13:59.029 "method": "bdev_xnvme_create" 00:13:59.029 }, 00:13:59.029 { 00:13:59.029 "method": "bdev_wait_for_examine" 00:13:59.029 } 00:13:59.029 ] 00:13:59.029 } 00:13:59.029 ] 00:13:59.029 } 00:13:59.029 [2024-07-11 18:20:45.405663] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:13:59.029 [2024-07-11 18:20:45.405836] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86155 ] 00:13:59.288 [2024-07-11 18:20:45.553365] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:59.288 [2024-07-11 18:20:45.587573] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:59.288 Running I/O for 5 seconds... 00:14:04.552 00:14:04.552 Latency(us) 00:14:04.552 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:04.552 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:04.552 null0 : 5.00 133340.88 520.86 0.00 0.00 476.95 134.98 997.93 00:14:04.552 =================================================================================================================== 00:14:04.552 Total : 133340.88 520.86 0.00 0.00 476.95 134.98 997.93 00:14:04.552 18:20:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:14:04.552 18:20:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:14:04.552 18:20:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:14:04.552 18:20:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:14:04.552 18:20:50 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:04.552 18:20:50 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:04.552 { 00:14:04.552 "subsystems": [ 00:14:04.552 { 00:14:04.552 "subsystem": "bdev", 00:14:04.552 "config": [ 00:14:04.552 { 00:14:04.552 "params": { 00:14:04.552 "io_mechanism": "io_uring", 00:14:04.552 "filename": "/dev/nullb0", 00:14:04.552 "name": "null0" 00:14:04.552 }, 00:14:04.552 "method": "bdev_xnvme_create" 00:14:04.552 }, 00:14:04.552 { 00:14:04.552 "method": "bdev_wait_for_examine" 00:14:04.552 } 00:14:04.552 ] 00:14:04.552 } 00:14:04.552 ] 00:14:04.552 } 00:14:04.552 [2024-07-11 18:20:50.953555] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:14:04.552 [2024-07-11 18:20:50.953743] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86218 ] 00:14:04.810 [2024-07-11 18:20:51.100532] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:04.810 [2024-07-11 18:20:51.137954] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:04.810 Running I/O for 5 seconds... 00:14:10.078 00:14:10.078 Latency(us) 00:14:10.078 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:10.078 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:10.078 null0 : 5.00 170001.13 664.07 0.00 0.00 373.48 314.65 741.00 00:14:10.078 =================================================================================================================== 00:14:10.078 Total : 170001.13 664.07 0.00 0.00 373.48 314.65 741.00 00:14:10.078 18:20:56 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:14:10.078 18:20:56 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@195 -- # modprobe -r null_blk 00:14:10.078 00:14:10.078 real 0m11.146s 00:14:10.078 user 0m8.287s 00:14:10.078 sys 0m2.657s 00:14:10.078 18:20:56 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:10.078 18:20:56 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:10.078 ************************************ 00:14:10.078 END TEST xnvme_bdevperf 00:14:10.078 ************************************ 00:14:10.078 18:20:56 nvme_xnvme -- common/autotest_common.sh@1142 -- # return 0 00:14:10.078 00:14:10.078 real 0m36.631s 00:14:10.078 user 0m28.728s 00:14:10.078 sys 0m7.194s 00:14:10.078 ************************************ 00:14:10.078 END TEST nvme_xnvme 00:14:10.078 ************************************ 00:14:10.078 18:20:56 nvme_xnvme -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:10.078 18:20:56 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:10.337 18:20:56 -- common/autotest_common.sh@1142 -- # return 0 00:14:10.337 18:20:56 -- spdk/autotest.sh@249 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:14:10.337 18:20:56 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:10.337 18:20:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:10.337 18:20:56 -- common/autotest_common.sh@10 -- # set +x 00:14:10.337 ************************************ 00:14:10.337 START TEST blockdev_xnvme 00:14:10.337 ************************************ 00:14:10.337 18:20:56 blockdev_xnvme -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:14:10.337 * Looking for test storage... 
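Both suites that just finished bracket their work with the null_blk helpers traced earlier: init_null_blk gb=1 before the runs, remove_null_blk after. A reconstruction from the dd/common.sh traces; the unload-if-loaded guard is an assumption, since the trace only shows the existence test.

init_null_blk() {
    # dd/common.sh@190 tests /sys/module/null_blk before loading; assumed to
    # unload a stale instance first so the size parameters take effect.
    [[ -e /sys/module/null_blk ]] && modprobe -r null_blk
    modprobe null_blk "$@"      # gb=1 exposes a 1 GiB /dev/nullb0
}
remove_null_blk() { modprobe -r null_blk; }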
00:14:10.337 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:14:10.337 18:20:56 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:14:10.337 18:20:56 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:14:10.337 18:20:56 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:14:10.337 18:20:56 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:14:10.337 18:20:56 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:14:10.337 18:20:56 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:14:10.337 18:20:56 blockdev_xnvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:14:10.337 18:20:56 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:14:10.337 18:20:56 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:14:10.337 18:20:56 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:14:10.337 18:20:56 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:14:10.337 18:20:56 blockdev_xnvme -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:14:10.337 18:20:56 blockdev_xnvme -- bdev/blockdev.sh@674 -- # uname -s 00:14:10.337 18:20:56 blockdev_xnvme -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:14:10.337 18:20:56 blockdev_xnvme -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:14:10.337 18:20:56 blockdev_xnvme -- bdev/blockdev.sh@682 -- # test_type=xnvme 00:14:10.337 18:20:56 blockdev_xnvme -- bdev/blockdev.sh@683 -- # crypto_device= 00:14:10.337 18:20:56 blockdev_xnvme -- bdev/blockdev.sh@684 -- # dek= 00:14:10.337 18:20:56 blockdev_xnvme -- bdev/blockdev.sh@685 -- # env_ctx= 00:14:10.337 18:20:56 blockdev_xnvme -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:14:10.337 18:20:56 blockdev_xnvme -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:14:10.337 18:20:56 blockdev_xnvme -- bdev/blockdev.sh@690 -- # [[ xnvme == bdev ]] 00:14:10.337 18:20:56 blockdev_xnvme -- bdev/blockdev.sh@690 -- # [[ xnvme == crypto_* ]] 00:14:10.337 18:20:56 blockdev_xnvme -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:14:10.337 18:20:56 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=86348 00:14:10.337 18:20:56 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:14:10.337 18:20:56 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:14:10.337 18:20:56 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 86348 00:14:10.337 18:20:56 blockdev_xnvme -- common/autotest_common.sh@829 -- # '[' -z 86348 ']' 00:14:10.337 18:20:56 blockdev_xnvme -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:10.337 18:20:56 blockdev_xnvme -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:10.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:10.337 18:20:56 blockdev_xnvme -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:10.337 18:20:56 blockdev_xnvme -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:10.337 18:20:56 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:10.337 [2024-07-11 18:20:56.738333] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
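blockdev.sh brings up a long-lived spdk_tgt and blocks in waitforlisten until the RPC socket answers (max_retries=100 in the trace above). A minimal stand-in for that helper, polling with the standard rpc_get_methods call; the 0.5 s interval is an assumption.

/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
spdk_tgt_pid=$!
for ((i = 0; i < 100; i++)); do
    # Succeeds only once the target is listening on /var/tmp/spdk.sock.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
    sleep 0.5
done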
00:14:10.337 [2024-07-11 18:20:56.738586] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86348 ] 00:14:10.596 [2024-07-11 18:20:56.884556] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:10.596 [2024-07-11 18:20:56.919646] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:11.540 18:20:57 blockdev_xnvme -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:11.540 18:20:57 blockdev_xnvme -- common/autotest_common.sh@862 -- # return 0 00:14:11.540 18:20:57 blockdev_xnvme -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:14:11.540 18:20:57 blockdev_xnvme -- bdev/blockdev.sh@729 -- # setup_xnvme_conf 00:14:11.540 18:20:57 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:14:11.540 18:20:57 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:14:11.540 18:20:57 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:14:11.815 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:11.815 Waiting for block devices as requested 00:14:11.815 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:14:12.074 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:14:12.074 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:14:12.074 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:14:17.344 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:14:17.344 18:21:03 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:14:17.344 18:21:03 blockdev_xnvme -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:14:17.344 18:21:03 blockdev_xnvme -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:14:17.344 18:21:03 blockdev_xnvme -- common/autotest_common.sh@1670 -- # local nvme bdf 00:14:17.344 18:21:03 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:14:17.344 18:21:03 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:14:17.344 18:21:03 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:14:17.344 18:21:03 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:14:17.344 18:21:03 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:14:17.344 18:21:03 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:14:17.344 18:21:03 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:14:17.344 18:21:03 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:14:17.344 18:21:03 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:14:17.344 18:21:03 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:14:17.344 18:21:03 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:14:17.344 18:21:03 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1 00:14:17.344 18:21:03 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme2n1 00:14:17.344 18:21:03 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:14:17.344 18:21:03 blockdev_xnvme -- 
common/autotest_common.sh@1665 -- # [[ none != none ]] 00:14:17.344 18:21:03 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:14:17.344 18:21:03 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n2 00:14:17.344 18:21:03 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme2n2 00:14:17.344 18:21:03 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:14:17.344 18:21:03 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:14:17.344 18:21:03 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:14:17.344 18:21:03 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n3 00:14:17.344 18:21:03 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme2n3 00:14:17.344 18:21:03 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:14:17.344 18:21:03 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:14:17.344 18:21:03 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:14:17.344 18:21:03 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3c3n1 00:14:17.344 18:21:03 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme3c3n1 00:14:17.344 18:21:03 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:14:17.344 18:21:03 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:14:17.344 18:21:03 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:14:17.344 18:21:03 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3n1 00:14:17.344 18:21:03 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme3n1 00:14:17.344 18:21:03 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:14:17.344 18:21:03 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:14:17.344 18:21:03 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:14:17.344 18:21:03 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:14:17.344 18:21:03 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:17.344 18:21:03 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:17.344 18:21:03 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:14:17.344 18:21:03 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:14:17.344 18:21:03 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:17.344 18:21:03 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:17.344 18:21:03 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:14:17.344 18:21:03 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:14:17.344 18:21:03 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:17.344 18:21:03 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:17.344 18:21:03 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:14:17.344 18:21:03 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n2 ]] 00:14:17.344 18:21:03 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:17.344 18:21:03 blockdev_xnvme -- 
bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:17.344 18:21:03 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:14:17.344 18:21:03 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n3 ]] 00:14:17.344 18:21:03 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:17.344 18:21:03 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:17.344 18:21:03 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:14:17.344 18:21:03 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:14:17.344 18:21:03 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:17.344 18:21:03 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:17.344 18:21:03 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:14:17.344 18:21:03 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:14:17.344 18:21:03 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.344 18:21:03 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:17.344 18:21:03 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme2n2 nvme2n2 io_uring' 'bdev_xnvme_create /dev/nvme2n3 nvme2n3 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:14:17.344 nvme0n1 00:14:17.344 nvme1n1 00:14:17.344 nvme2n1 00:14:17.344 nvme2n2 00:14:17.344 nvme2n3 00:14:17.344 nvme3n1 00:14:17.344 18:21:03 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.344 18:21:03 blockdev_xnvme -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:14:17.344 18:21:03 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.344 18:21:03 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:17.344 18:21:03 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.344 18:21:03 blockdev_xnvme -- bdev/blockdev.sh@740 -- # cat 00:14:17.344 18:21:03 blockdev_xnvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:14:17.344 18:21:03 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.344 18:21:03 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:17.344 18:21:03 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.344 18:21:03 blockdev_xnvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:14:17.344 18:21:03 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.344 18:21:03 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:17.344 18:21:03 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.344 18:21:03 blockdev_xnvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:14:17.344 18:21:03 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.344 18:21:03 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:17.344 18:21:03 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.344 18:21:03 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:14:17.344 18:21:03 blockdev_xnvme -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:14:17.344 18:21:03 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.344 
18:21:03 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:14:17.344 18:21:03 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:17.344 18:21:03 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.344 18:21:03 blockdev_xnvme -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:14:17.344 18:21:03 blockdev_xnvme -- bdev/blockdev.sh@749 -- # jq -r .name 00:14:17.345 18:21:03 blockdev_xnvme -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "db5c42e6-0ce8-4034-8a75-520e1c27d072"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "db5c42e6-0ce8-4034-8a75-520e1c27d072",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "95ed11d4-efb8-41b7-8670-34d14954a12a"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "95ed11d4-efb8-41b7-8670-34d14954a12a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "91e46d94-9960-45c9-b97e-ef950460b507"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "91e46d94-9960-45c9-b97e-ef950460b507",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "231ef4a2-144d-4dda-aaf7-43252700a6e4"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "231ef4a2-144d-4dda-aaf7-43252700a6e4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 
0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "17a1fccc-b968-42ff-abba-3da3ebd9f21c"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "17a1fccc-b968-42ff-abba-3da3ebd9f21c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "a191e82f-ef7a-469b-a429-c7907a8621e2"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "a191e82f-ef7a-469b-a429-c7907a8621e2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:14:17.604 18:21:03 blockdev_xnvme -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:14:17.604 18:21:03 blockdev_xnvme -- bdev/blockdev.sh@752 -- # hello_world_bdev=nvme0n1 00:14:17.604 18:21:03 blockdev_xnvme -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:14:17.604 18:21:03 blockdev_xnvme -- bdev/blockdev.sh@754 -- # killprocess 86348 00:14:17.604 18:21:03 blockdev_xnvme -- common/autotest_common.sh@948 -- # '[' -z 86348 ']' 00:14:17.604 18:21:03 blockdev_xnvme -- common/autotest_common.sh@952 -- # kill -0 86348 00:14:17.604 18:21:03 blockdev_xnvme -- common/autotest_common.sh@953 -- # uname 00:14:17.604 18:21:03 blockdev_xnvme -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:17.604 18:21:03 blockdev_xnvme -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86348 00:14:17.604 18:21:03 blockdev_xnvme -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:17.604 18:21:03 blockdev_xnvme -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:17.604 killing process with pid 86348 00:14:17.604 18:21:03 blockdev_xnvme -- common/autotest_common.sh@966 -- # echo 
'killing process with pid 86348' 00:14:17.604 18:21:03 blockdev_xnvme -- common/autotest_common.sh@967 -- # kill 86348 00:14:17.604 18:21:03 blockdev_xnvme -- common/autotest_common.sh@972 -- # wait 86348 00:14:17.863 18:21:04 blockdev_xnvme -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:14:17.863 18:21:04 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:14:17.863 18:21:04 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:14:17.863 18:21:04 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:17.863 18:21:04 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:17.863 ************************************ 00:14:17.863 START TEST bdev_hello_world 00:14:17.863 ************************************ 00:14:17.863 18:21:04 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:14:17.863 [2024-07-11 18:21:04.200869] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:14:17.864 [2024-07-11 18:21:04.201068] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86692 ] 00:14:18.123 [2024-07-11 18:21:04.348867] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:18.123 [2024-07-11 18:21:04.385410] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:18.383 [2024-07-11 18:21:04.545608] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:14:18.383 [2024-07-11 18:21:04.545680] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:14:18.383 [2024-07-11 18:21:04.545717] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:14:18.383 [2024-07-11 18:21:04.548001] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:14:18.383 [2024-07-11 18:21:04.548402] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:14:18.383 [2024-07-11 18:21:04.548440] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:14:18.383 [2024-07-11 18:21:04.548734] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
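The hello-world step is a self-contained round trip against the first xnvme bdev: hello_bdev opens nvme0n1, writes "Hello World!", and reads it back, which is exactly the NOTICE sequence above. As invoked in this log:

/home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1
# --json supplies the same six-bdev config used throughout this suite;
# -b names the bdev to open.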
00:14:18.383 00:14:18.383 [2024-07-11 18:21:04.548773] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:14:18.383 00:14:18.383 real 0m0.640s 00:14:18.383 user 0m0.363s 00:14:18.383 sys 0m0.169s 00:14:18.383 18:21:04 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:18.383 18:21:04 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:14:18.383 ************************************ 00:14:18.383 END TEST bdev_hello_world 00:14:18.383 ************************************ 00:14:18.383 18:21:04 blockdev_xnvme -- common/autotest_common.sh@1142 -- # return 0 00:14:18.383 18:21:04 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:14:18.383 18:21:04 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:18.383 18:21:04 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:18.383 18:21:04 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:18.643 ************************************ 00:14:18.643 START TEST bdev_bounds 00:14:18.643 ************************************ 00:14:18.643 18:21:04 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1123 -- # bdev_bounds '' 00:14:18.643 18:21:04 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=86723 00:14:18.643 18:21:04 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:14:18.643 18:21:04 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:14:18.643 Process bdevio pid: 86723 00:14:18.643 18:21:04 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 86723' 00:14:18.643 18:21:04 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 86723 00:14:18.643 18:21:04 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@829 -- # '[' -z 86723 ']' 00:14:18.643 18:21:04 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:18.643 18:21:04 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:18.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:18.643 18:21:04 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:18.643 18:21:04 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:18.643 18:21:04 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:14:18.643 [2024-07-11 18:21:04.876500] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
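bdev_bounds runs bdevio in wait mode and then fires the whole CUnit suite over RPC, per the two commands traced above. A condensed sketch; the reading of -w (start idle until triggered) and -s 0 (no pre-reserved memory, matching PRE_RESERVED_MEM=0 earlier in this log) is best-effort rather than taken from this excerpt.

/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' &
bdevio_pid=$!
# Once the RPC socket is up (the harness waits for it first), kick off every
# registered test suite against the loaded bdevs.
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
kill "$bdevio_pid"    # the harness uses its killprocess helper instead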
00:14:18.643 [2024-07-11 18:21:04.876665] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86723 ] 00:14:18.643 [2024-07-11 18:21:05.019242] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:18.902 [2024-07-11 18:21:05.058347] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:18.902 [2024-07-11 18:21:05.058433] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:18.902 [2024-07-11 18:21:05.058503] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:19.471 18:21:05 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:19.471 18:21:05 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@862 -- # return 0 00:14:19.471 18:21:05 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:14:19.730 I/O targets: 00:14:19.730 nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:14:19.730 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:14:19.730 nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:14:19.730 nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:14:19.730 nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:14:19.730 nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:14:19.730 00:14:19.730 00:14:19.730 CUnit - A unit testing framework for C - Version 2.1-3 00:14:19.730 http://cunit.sourceforge.net/ 00:14:19.730 00:14:19.730 00:14:19.730 Suite: bdevio tests on: nvme3n1 00:14:19.730 Test: blockdev write read block ...passed 00:14:19.730 Test: blockdev write zeroes read block ...passed 00:14:19.730 Test: blockdev write zeroes read no split ...passed 00:14:19.730 Test: blockdev write zeroes read split ...passed 00:14:19.730 Test: blockdev write zeroes read split partial ...passed 00:14:19.730 Test: blockdev reset ...passed 00:14:19.730 Test: blockdev write read 8 blocks ...passed 00:14:19.730 Test: blockdev write read size > 128k ...passed 00:14:19.730 Test: blockdev write read invalid size ...passed 00:14:19.730 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:19.730 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:19.730 Test: blockdev write read max offset ...passed 00:14:19.730 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:19.730 Test: blockdev writev readv 8 blocks ...passed 00:14:19.730 Test: blockdev writev readv 30 x 1block ...passed 00:14:19.730 Test: blockdev writev readv block ...passed 00:14:19.730 Test: blockdev writev readv size > 128k ...passed 00:14:19.730 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:19.730 Test: blockdev comparev and writev ...passed 00:14:19.730 Test: blockdev nvme passthru rw ...passed 00:14:19.730 Test: blockdev nvme passthru vendor specific ...passed 00:14:19.730 Test: blockdev nvme admin passthru ...passed 00:14:19.730 Test: blockdev copy ...passed 00:14:19.730 Suite: bdevio tests on: nvme2n3 00:14:19.730 Test: blockdev write read block ...passed 00:14:19.730 Test: blockdev write zeroes read block ...passed 00:14:19.730 Test: blockdev write zeroes read no split ...passed 00:14:19.730 Test: blockdev write zeroes read split ...passed 00:14:19.730 Test: blockdev write zeroes read split partial ...passed 00:14:19.730 Test: blockdev reset ...passed 
00:14:19.730 Test: blockdev write read 8 blocks ...passed 00:14:19.730 Test: blockdev write read size > 128k ...passed 00:14:19.730 Test: blockdev write read invalid size ...passed 00:14:19.730 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:19.730 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:19.730 Test: blockdev write read max offset ...passed 00:14:19.730 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:19.730 Test: blockdev writev readv 8 blocks ...passed 00:14:19.730 Test: blockdev writev readv 30 x 1block ...passed 00:14:19.730 Test: blockdev writev readv block ...passed 00:14:19.730 Test: blockdev writev readv size > 128k ...passed 00:14:19.730 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:19.730 Test: blockdev comparev and writev ...passed 00:14:19.730 Test: blockdev nvme passthru rw ...passed 00:14:19.730 Test: blockdev nvme passthru vendor specific ...passed 00:14:19.730 Test: blockdev nvme admin passthru ...passed 00:14:19.730 Test: blockdev copy ...passed 00:14:19.730 Suite: bdevio tests on: nvme2n2 00:14:19.730 Test: blockdev write read block ...passed 00:14:19.730 Test: blockdev write zeroes read block ...passed 00:14:19.730 Test: blockdev write zeroes read no split ...passed 00:14:19.730 Test: blockdev write zeroes read split ...passed 00:14:19.730 Test: blockdev write zeroes read split partial ...passed 00:14:19.730 Test: blockdev reset ...passed 00:14:19.730 Test: blockdev write read 8 blocks ...passed 00:14:19.730 Test: blockdev write read size > 128k ...passed 00:14:19.730 Test: blockdev write read invalid size ...passed 00:14:19.730 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:19.730 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:19.730 Test: blockdev write read max offset ...passed 00:14:19.730 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:19.730 Test: blockdev writev readv 8 blocks ...passed 00:14:19.730 Test: blockdev writev readv 30 x 1block ...passed 00:14:19.730 Test: blockdev writev readv block ...passed 00:14:19.730 Test: blockdev writev readv size > 128k ...passed 00:14:19.730 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:19.730 Test: blockdev comparev and writev ...passed 00:14:19.730 Test: blockdev nvme passthru rw ...passed 00:14:19.730 Test: blockdev nvme passthru vendor specific ...passed 00:14:19.730 Test: blockdev nvme admin passthru ...passed 00:14:19.730 Test: blockdev copy ...passed 00:14:19.730 Suite: bdevio tests on: nvme2n1 00:14:19.730 Test: blockdev write read block ...passed 00:14:19.730 Test: blockdev write zeroes read block ...passed 00:14:19.730 Test: blockdev write zeroes read no split ...passed 00:14:19.730 Test: blockdev write zeroes read split ...passed 00:14:19.730 Test: blockdev write zeroes read split partial ...passed 00:14:19.730 Test: blockdev reset ...passed 00:14:19.730 Test: blockdev write read 8 blocks ...passed 00:14:19.730 Test: blockdev write read size > 128k ...passed 00:14:19.730 Test: blockdev write read invalid size ...passed 00:14:19.730 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:19.730 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:19.731 Test: blockdev write read max offset ...passed 00:14:19.731 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:19.731 Test: blockdev writev readv 8 blocks 
...passed 00:14:19.731 Test: blockdev writev readv 30 x 1block ...passed 00:14:19.731 Test: blockdev writev readv block ...passed 00:14:19.731 Test: blockdev writev readv size > 128k ...passed 00:14:19.731 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:19.731 Test: blockdev comparev and writev ...passed 00:14:19.731 Test: blockdev nvme passthru rw ...passed 00:14:19.731 Test: blockdev nvme passthru vendor specific ...passed 00:14:19.731 Test: blockdev nvme admin passthru ...passed 00:14:19.731 Test: blockdev copy ...passed 00:14:19.731 Suite: bdevio tests on: nvme1n1 00:14:19.731 Test: blockdev write read block ...passed 00:14:19.731 Test: blockdev write zeroes read block ...passed 00:14:19.731 Test: blockdev write zeroes read no split ...passed 00:14:19.731 Test: blockdev write zeroes read split ...passed 00:14:19.731 Test: blockdev write zeroes read split partial ...passed 00:14:19.731 Test: blockdev reset ...passed 00:14:19.731 Test: blockdev write read 8 blocks ...passed 00:14:19.731 Test: blockdev write read size > 128k ...passed 00:14:19.731 Test: blockdev write read invalid size ...passed 00:14:19.731 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:19.731 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:19.731 Test: blockdev write read max offset ...passed 00:14:19.731 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:19.731 Test: blockdev writev readv 8 blocks ...passed 00:14:19.731 Test: blockdev writev readv 30 x 1block ...passed 00:14:19.731 Test: blockdev writev readv block ...passed 00:14:19.731 Test: blockdev writev readv size > 128k ...passed 00:14:19.731 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:19.731 Test: blockdev comparev and writev ...passed 00:14:19.731 Test: blockdev nvme passthru rw ...passed 00:14:19.731 Test: blockdev nvme passthru vendor specific ...passed 00:14:19.731 Test: blockdev nvme admin passthru ...passed 00:14:19.731 Test: blockdev copy ...passed 00:14:19.731 Suite: bdevio tests on: nvme0n1 00:14:19.731 Test: blockdev write read block ...passed 00:14:19.731 Test: blockdev write zeroes read block ...passed 00:14:19.731 Test: blockdev write zeroes read no split ...passed 00:14:19.731 Test: blockdev write zeroes read split ...passed 00:14:19.731 Test: blockdev write zeroes read split partial ...passed 00:14:19.731 Test: blockdev reset ...passed 00:14:19.731 Test: blockdev write read 8 blocks ...passed 00:14:19.731 Test: blockdev write read size > 128k ...passed 00:14:19.731 Test: blockdev write read invalid size ...passed 00:14:19.731 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:19.731 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:19.731 Test: blockdev write read max offset ...passed 00:14:19.731 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:19.731 Test: blockdev writev readv 8 blocks ...passed 00:14:19.731 Test: blockdev writev readv 30 x 1block ...passed 00:14:19.731 Test: blockdev writev readv block ...passed 00:14:19.731 Test: blockdev writev readv size > 128k ...passed 00:14:19.731 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:19.731 Test: blockdev comparev and writev ...passed 00:14:19.731 Test: blockdev nvme passthru rw ...passed 00:14:19.731 Test: blockdev nvme passthru vendor specific ...passed 00:14:19.731 Test: blockdev nvme admin passthru ...passed 00:14:19.731 Test: blockdev copy ...passed 
00:14:19.731 00:14:19.731 Run Summary: Type Total Ran Passed Failed Inactive 00:14:19.731 suites 6 6 n/a 0 0 00:14:19.731 tests 138 138 138 0 0 00:14:19.731 asserts 780 780 780 0 n/a 00:14:19.731 00:14:19.731 Elapsed time = 0.295 seconds 00:14:19.731 0 00:14:19.731 18:21:06 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 86723 00:14:19.731 18:21:06 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@948 -- # '[' -z 86723 ']' 00:14:19.731 18:21:06 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@952 -- # kill -0 86723 00:14:19.731 18:21:06 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@953 -- # uname 00:14:19.731 18:21:06 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:19.731 18:21:06 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86723 00:14:19.731 18:21:06 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:19.731 18:21:06 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:19.731 killing process with pid 86723 00:14:19.731 18:21:06 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86723' 00:14:19.731 18:21:06 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@967 -- # kill 86723 00:14:19.731 18:21:06 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # wait 86723 00:14:19.991 18:21:06 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:14:19.991 00:14:19.991 real 0m1.479s 00:14:19.991 user 0m3.811s 00:14:19.991 sys 0m0.293s 00:14:19.991 18:21:06 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:19.991 ************************************ 00:14:19.991 END TEST bdev_bounds 00:14:19.991 ************************************ 00:14:19.991 18:21:06 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:14:19.991 18:21:06 blockdev_xnvme -- common/autotest_common.sh@1142 -- # return 0 00:14:19.991 18:21:06 blockdev_xnvme -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:14:19.991 18:21:06 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:14:19.991 18:21:06 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:19.991 18:21:06 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:19.991 ************************************ 00:14:19.991 START TEST bdev_nbd 00:14:19.991 ************************************ 00:14:19.991 18:21:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1123 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:14:19.991 18:21:06 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:14:19.991 18:21:06 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:14:19.991 18:21:06 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:19.991 18:21:06 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:14:19.991 18:21:06 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:14:19.991 18:21:06 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_all 
00:14:19.991 18:21:06 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@305 -- # local bdev_num=6 00:14:19.991 18:21:06 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:14:19.991 18:21:06 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:14:19.991 18:21:06 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # local nbd_all 00:14:19.991 18:21:06 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@312 -- # bdev_num=6 00:14:19.991 18:21:06 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:19.991 18:21:06 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local nbd_list 00:14:19.991 18:21:06 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@315 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:14:19.991 18:21:06 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@315 -- # local bdev_list 00:14:19.991 18:21:06 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # nbd_pid=86766 00:14:19.991 18:21:06 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:14:19.991 18:21:06 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@320 -- # waitforlisten 86766 /var/tmp/spdk-nbd.sock 00:14:19.991 18:21:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@829 -- # '[' -z 86766 ']' 00:14:19.991 18:21:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:14:19.991 18:21:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:19.991 18:21:06 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:14:19.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:14:19.991 18:21:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:14:19.991 18:21:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:19.991 18:21:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:14:20.250 [2024-07-11 18:21:06.427464] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
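bdev_nbd exports each of the six bdevs through a dedicated RPC server on /var/tmp/spdk-nbd.sock and sanity-checks the resulting kernel device with one 4 KiB direct-I/O read, which is what the nbd_start_disk and dd traces below do for nvme0n1 onwards. A condensed sketch; nbd_stop_disk is the standard counterpart RPC, assumed here rather than taken from this excerpt.

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
$rpc nbd_start_disk nvme0n1        # prints the allocated device, e.g. /dev/nbd0
dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct
$rpc nbd_stop_disk /dev/nbd0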
00:14:20.250 [2024-07-11 18:21:06.427616] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:20.250 [2024-07-11 18:21:06.573737] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:20.250 [2024-07-11 18:21:06.609404] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:21.186 18:21:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:21.186 18:21:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@862 -- # return 0 00:14:21.186 18:21:07 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:14:21.186 18:21:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:21.186 18:21:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:14:21.186 18:21:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:14:21.186 18:21:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:14:21.186 18:21:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:21.186 18:21:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:14:21.186 18:21:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:14:21.186 18:21:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:14:21.186 18:21:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:14:21.187 18:21:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:14:21.187 18:21:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:21.187 18:21:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:14:21.445 18:21:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:14:21.445 18:21:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:14:21.445 18:21:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:14:21.445 18:21:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:14:21.445 18:21:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:14:21.445 18:21:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:14:21.445 18:21:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:14:21.445 18:21:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:14:21.445 18:21:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:14:21.445 18:21:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:14:21.445 18:21:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:14:21.445 18:21:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:21.445 
1+0 records in 00:14:21.445 1+0 records out 00:14:21.445 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000414975 s, 9.9 MB/s 00:14:21.445 18:21:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:21.445 18:21:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:14:21.445 18:21:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:21.445 18:21:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:14:21.445 18:21:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:14:21.445 18:21:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:21.445 18:21:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:21.445 18:21:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:14:21.445 18:21:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:14:21.445 18:21:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:14:21.445 18:21:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:14:21.445 18:21:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:14:21.445 18:21:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:14:21.446 18:21:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:14:21.446 18:21:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:14:21.446 18:21:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:14:21.705 18:21:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:14:21.705 18:21:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:14:21.706 18:21:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:14:21.706 18:21:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:21.706 1+0 records in 00:14:21.706 1+0 records out 00:14:21.706 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000821049 s, 5.0 MB/s 00:14:21.706 18:21:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:21.706 18:21:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:14:21.706 18:21:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:21.706 18:21:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:14:21.706 18:21:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:14:21.706 18:21:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:21.706 18:21:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:21.706 18:21:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:14:21.965 18:21:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:14:21.965 18:21:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:14:21.965 18:21:08 
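
The waitfornbd helper traced above (common/autotest_common.sh lines 866-887) is a two-stage readiness check: poll /proc/partitions until the kernel has registered the device name, then prove the node actually serves I/O with a single O_DIRECT 4 KiB read whose byte count is checked back via stat. A minimal bash sketch reconstructed from the xtrace follows; the sleep between retries and the scratch-file path are assumptions, since the trace shows only the loop bounds and a successful first iteration:

    # reconstructed from the xtrace above, not the verbatim helper;
    # scratch-file path simplified, retry delay assumed
    waitfornbd() {
        local nbd_name=$1 i size
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # assumed back-off between polls
        done
        for ((i = 1; i <= 20; i++)); do
            # one 4 KiB direct read proves the device answers I/O
            if dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct; then
                size=$(stat -c %s /tmp/nbdtest)
                rm -f /tmp/nbdtest
                [ "$size" != 0 ] && return 0
            fi
            sleep 0.1   # assumed
        done
        return 1
    }
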
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:14:21.965 18:21:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:14:21.965 18:21:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:14:21.965 18:21:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:14:21.965 18:21:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:14:21.965 18:21:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:14:21.965 18:21:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:14:21.965 18:21:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:14:21.965 18:21:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:14:21.965 18:21:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:21.965 1+0 records in 00:14:21.965 1+0 records out 00:14:21.965 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000463416 s, 8.8 MB/s 00:14:21.965 18:21:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:21.965 18:21:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:14:21.965 18:21:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:21.965 18:21:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:14:21.965 18:21:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:14:21.965 18:21:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:21.965 18:21:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:21.965 18:21:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 00:14:22.224 18:21:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:14:22.224 18:21:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:14:22.224 18:21:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:14:22.224 18:21:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:14:22.224 18:21:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:14:22.224 18:21:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:14:22.224 18:21:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:14:22.224 18:21:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:14:22.224 18:21:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:14:22.224 18:21:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:14:22.224 18:21:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:14:22.224 18:21:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:22.224 1+0 records in 00:14:22.224 1+0 records out 00:14:22.224 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000692403 s, 5.9 MB/s 00:14:22.224 18:21:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:22.224 18:21:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:14:22.224 18:21:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:22.224 18:21:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:14:22.224 18:21:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:14:22.224 18:21:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:22.224 18:21:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:22.224 18:21:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 00:14:22.483 18:21:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:14:22.483 18:21:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:14:22.483 18:21:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:14:22.483 18:21:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:14:22.483 18:21:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:14:22.483 18:21:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:14:22.483 18:21:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:14:22.483 18:21:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:14:22.483 18:21:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:14:22.483 18:21:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:14:22.483 18:21:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:14:22.483 18:21:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:22.483 1+0 records in 00:14:22.483 1+0 records out 00:14:22.483 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000883587 s, 4.6 MB/s 00:14:22.483 18:21:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:22.483 18:21:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:14:22.483 18:21:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:22.483 18:21:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:14:22.483 18:21:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:14:22.483 18:21:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:22.483 18:21:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:22.483 18:21:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:14:22.741 18:21:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:14:22.741 18:21:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:14:22.741 18:21:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:14:22.741 18:21:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:14:22.741 18:21:09 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:14:22.741 18:21:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:14:22.741 18:21:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:14:22.741 18:21:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:14:22.741 18:21:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:14:22.741 18:21:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:14:22.741 18:21:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:14:22.741 18:21:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:22.741 1+0 records in 00:14:22.741 1+0 records out 00:14:22.741 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000886221 s, 4.6 MB/s 00:14:22.741 18:21:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:22.741 18:21:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:14:22.741 18:21:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:22.741 18:21:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:14:22.741 18:21:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:14:22.741 18:21:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:22.741 18:21:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:22.741 18:21:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:23.000 18:21:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:14:23.000 { 00:14:23.000 "nbd_device": "/dev/nbd0", 00:14:23.000 "bdev_name": "nvme0n1" 00:14:23.000 }, 00:14:23.000 { 00:14:23.000 "nbd_device": "/dev/nbd1", 00:14:23.000 "bdev_name": "nvme1n1" 00:14:23.000 }, 00:14:23.000 { 00:14:23.000 "nbd_device": "/dev/nbd2", 00:14:23.000 "bdev_name": "nvme2n1" 00:14:23.000 }, 00:14:23.000 { 00:14:23.000 "nbd_device": "/dev/nbd3", 00:14:23.000 "bdev_name": "nvme2n2" 00:14:23.000 }, 00:14:23.000 { 00:14:23.000 "nbd_device": "/dev/nbd4", 00:14:23.000 "bdev_name": "nvme2n3" 00:14:23.000 }, 00:14:23.000 { 00:14:23.000 "nbd_device": "/dev/nbd5", 00:14:23.000 "bdev_name": "nvme3n1" 00:14:23.000 } 00:14:23.000 ]' 00:14:23.000 18:21:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:14:23.000 18:21:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:14:23.000 { 00:14:23.000 "nbd_device": "/dev/nbd0", 00:14:23.000 "bdev_name": "nvme0n1" 00:14:23.000 }, 00:14:23.000 { 00:14:23.000 "nbd_device": "/dev/nbd1", 00:14:23.000 "bdev_name": "nvme1n1" 00:14:23.000 }, 00:14:23.000 { 00:14:23.000 "nbd_device": "/dev/nbd2", 00:14:23.000 "bdev_name": "nvme2n1" 00:14:23.000 }, 00:14:23.000 { 00:14:23.000 "nbd_device": "/dev/nbd3", 00:14:23.000 "bdev_name": "nvme2n2" 00:14:23.000 }, 00:14:23.000 { 00:14:23.000 "nbd_device": "/dev/nbd4", 00:14:23.000 "bdev_name": "nvme2n3" 00:14:23.000 }, 00:14:23.000 { 00:14:23.000 "nbd_device": "/dev/nbd5", 00:14:23.000 "bdev_name": "nvme3n1" 00:14:23.000 } 00:14:23.000 ]' 00:14:23.000 18:21:09 
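
Each nbd_start_disk call in this first pass omits the device argument, so the SPDK app picks the next free node and the RPC prints it back (nbd_device=/dev/nbd0 and so on). The resulting mapping can then be read out in bulk, which is what nbd_common.sh@118-119 does with the JSON echoed above. The same query works by hand against this test's RPC socket:

    # list the bdev-to-/dev/nbdX mapping currently exported by the app
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks \
        | jq -r '.[] | .nbd_device'
    # prints /dev/nbd0 .. /dev/nbd5 one per line; the test later counts
    # these with 'grep -c /dev/nbd' and compares against the bdev count
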
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:14:23.000 18:21:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:14:23.000 18:21:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:23.000 18:21:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:14:23.000 18:21:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:23.000 18:21:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:14:23.000 18:21:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:23.000 18:21:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:14:23.259 18:21:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:23.259 18:21:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:23.259 18:21:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:23.259 18:21:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:23.259 18:21:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:23.259 18:21:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:23.259 18:21:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:23.260 18:21:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:23.260 18:21:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:23.260 18:21:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:14:23.518 18:21:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:23.518 18:21:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:23.518 18:21:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:23.518 18:21:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:23.518 18:21:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:23.518 18:21:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:23.519 18:21:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:23.519 18:21:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:23.519 18:21:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:23.519 18:21:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:14:23.777 18:21:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:14:23.777 18:21:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:14:23.777 18:21:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:14:23.777 18:21:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:23.777 18:21:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:23.777 18:21:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:14:23.777 18:21:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:23.777 18:21:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:23.777 18:21:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:23.777 18:21:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:14:24.036 18:21:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:14:24.036 18:21:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:14:24.036 18:21:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:14:24.036 18:21:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:24.036 18:21:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:24.036 18:21:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:14:24.036 18:21:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:24.036 18:21:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:24.036 18:21:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:24.036 18:21:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:14:24.295 18:21:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:14:24.295 18:21:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:14:24.295 18:21:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:14:24.295 18:21:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:24.295 18:21:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:24.295 18:21:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:14:24.295 18:21:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:24.295 18:21:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:24.295 18:21:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:24.295 18:21:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:14:24.554 18:21:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:14:24.554 18:21:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:14:24.554 18:21:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:14:24.554 18:21:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:24.554 18:21:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:24.554 18:21:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:14:24.554 18:21:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:24.554 18:21:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:24.554 18:21:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:24.554 18:21:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:24.554 18:21:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:24.812 18:21:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:14:24.812 18:21:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:14:24.812 18:21:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:24.812 18:21:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:14:24.812 18:21:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:24.812 18:21:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:14:24.812 18:21:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:14:24.812 18:21:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:14:24.812 18:21:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:14:24.812 18:21:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:14:24.812 18:21:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:14:24.812 18:21:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:14:24.812 18:21:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:14:24.812 18:21:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:24.812 18:21:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:14:24.812 18:21:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:14:24.812 18:21:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:24.812 18:21:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:14:24.812 18:21:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:14:24.812 18:21:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:24.812 18:21:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:14:24.812 18:21:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:24.812 18:21:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:24.812 18:21:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:24.812 18:21:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:14:24.812 18:21:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:24.812 18:21:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:24.812 18:21:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:14:25.070 /dev/nbd0 00:14:25.070 18:21:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:25.070 18:21:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:25.070 18:21:11 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:14:25.070 18:21:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:14:25.070 18:21:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:14:25.070 18:21:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:14:25.070 18:21:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:14:25.070 18:21:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:14:25.070 18:21:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:14:25.070 18:21:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:14:25.070 18:21:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:25.070 1+0 records in 00:14:25.070 1+0 records out 00:14:25.070 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000510907 s, 8.0 MB/s 00:14:25.070 18:21:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:25.070 18:21:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:14:25.070 18:21:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:25.070 18:21:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:14:25.070 18:21:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:14:25.070 18:21:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:25.070 18:21:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:25.070 18:21:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:14:25.328 /dev/nbd1 00:14:25.328 18:21:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:25.328 18:21:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:25.328 18:21:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:14:25.328 18:21:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:14:25.328 18:21:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:14:25.328 18:21:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:14:25.328 18:21:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:14:25.328 18:21:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:14:25.328 18:21:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:14:25.328 18:21:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:14:25.328 18:21:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:25.328 1+0 records in 00:14:25.328 1+0 records out 00:14:25.328 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000747074 s, 5.5 MB/s 00:14:25.328 18:21:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:25.328 18:21:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:14:25.328 18:21:11 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:25.328 18:21:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:14:25.328 18:21:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:14:25.328 18:21:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:25.328 18:21:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:25.328 18:21:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd10 00:14:25.586 /dev/nbd10 00:14:25.586 18:21:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:14:25.586 18:21:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:14:25.586 18:21:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:14:25.586 18:21:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:14:25.586 18:21:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:14:25.586 18:21:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:14:25.586 18:21:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:14:25.586 18:21:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:14:25.586 18:21:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:14:25.586 18:21:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:14:25.586 18:21:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:25.586 1+0 records in 00:14:25.586 1+0 records out 00:14:25.586 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00113155 s, 3.6 MB/s 00:14:25.586 18:21:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:25.586 18:21:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:14:25.586 18:21:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:25.586 18:21:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:14:25.586 18:21:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:14:25.586 18:21:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:25.586 18:21:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:25.586 18:21:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 /dev/nbd11 00:14:25.844 /dev/nbd11 00:14:25.844 18:21:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:14:25.844 18:21:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:14:25.844 18:21:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:14:25.844 18:21:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:14:25.844 18:21:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:14:25.844 18:21:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:14:25.844 18:21:12 
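
From here on the test drives nbd_rpc_data_verify, and nbd_start_disks (nbd_common.sh@9-15) passes an explicit node as the second nbd_start_disk argument instead of letting the app choose, pairing bdev_list and nbd_list by index. A sketch of that loop as reconstructed from the trace, with the scripts/rpc.py path shortened:

    bdev_list=(nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1)
    nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)
    for ((i = 0; i < ${#bdev_list[@]}; i++)); do
        # bind bdev i to the requested kernel node, then wait for readiness
        rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk "${bdev_list[i]}" "${nbd_list[i]}"
        waitfornbd "$(basename "${nbd_list[i]}")"
    done
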
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:14:25.844 18:21:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:14:25.844 18:21:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:14:25.844 18:21:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:14:25.844 18:21:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:25.844 1+0 records in 00:14:25.844 1+0 records out 00:14:25.844 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000874676 s, 4.7 MB/s 00:14:25.844 18:21:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:25.844 18:21:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:14:25.844 18:21:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:25.844 18:21:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:14:25.844 18:21:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:14:25.844 18:21:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:25.844 18:21:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:25.844 18:21:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 /dev/nbd12 00:14:26.103 /dev/nbd12 00:14:26.103 18:21:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:14:26.103 18:21:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:14:26.103 18:21:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:14:26.103 18:21:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:14:26.103 18:21:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:14:26.103 18:21:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:14:26.103 18:21:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 00:14:26.103 18:21:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:14:26.103 18:21:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:14:26.103 18:21:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:14:26.103 18:21:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:26.103 1+0 records in 00:14:26.103 1+0 records out 00:14:26.103 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000590925 s, 6.9 MB/s 00:14:26.103 18:21:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:26.103 18:21:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:14:26.103 18:21:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:26.103 18:21:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:14:26.103 18:21:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:14:26.103 18:21:12 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:26.103 18:21:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:26.103 18:21:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:14:26.361 /dev/nbd13 00:14:26.361 18:21:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:14:26.361 18:21:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:14:26.361 18:21:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:14:26.361 18:21:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:14:26.361 18:21:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:14:26.361 18:21:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:14:26.361 18:21:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 00:14:26.361 18:21:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:14:26.361 18:21:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:14:26.361 18:21:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:14:26.361 18:21:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:26.361 1+0 records in 00:14:26.361 1+0 records out 00:14:26.361 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00111294 s, 3.7 MB/s 00:14:26.361 18:21:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:26.361 18:21:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:14:26.361 18:21:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:26.361 18:21:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:14:26.361 18:21:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:14:26.361 18:21:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:26.361 18:21:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:26.361 18:21:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:26.361 18:21:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:26.361 18:21:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:26.619 18:21:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:14:26.619 { 00:14:26.619 "nbd_device": "/dev/nbd0", 00:14:26.619 "bdev_name": "nvme0n1" 00:14:26.619 }, 00:14:26.619 { 00:14:26.619 "nbd_device": "/dev/nbd1", 00:14:26.619 "bdev_name": "nvme1n1" 00:14:26.619 }, 00:14:26.619 { 00:14:26.619 "nbd_device": "/dev/nbd10", 00:14:26.619 "bdev_name": "nvme2n1" 00:14:26.619 }, 00:14:26.619 { 00:14:26.619 "nbd_device": "/dev/nbd11", 00:14:26.619 "bdev_name": "nvme2n2" 00:14:26.619 }, 00:14:26.619 { 00:14:26.619 "nbd_device": "/dev/nbd12", 00:14:26.619 "bdev_name": "nvme2n3" 00:14:26.619 }, 00:14:26.619 { 00:14:26.619 "nbd_device": "/dev/nbd13", 00:14:26.619 "bdev_name": "nvme3n1" 00:14:26.619 } 00:14:26.619 ]' 00:14:26.619 18:21:12 blockdev_xnvme.bdev_nbd 
-- bdev/nbd_common.sh@64 -- # echo '[ 00:14:26.619 { 00:14:26.619 "nbd_device": "/dev/nbd0", 00:14:26.619 "bdev_name": "nvme0n1" 00:14:26.619 }, 00:14:26.619 { 00:14:26.619 "nbd_device": "/dev/nbd1", 00:14:26.619 "bdev_name": "nvme1n1" 00:14:26.619 }, 00:14:26.619 { 00:14:26.619 "nbd_device": "/dev/nbd10", 00:14:26.619 "bdev_name": "nvme2n1" 00:14:26.619 }, 00:14:26.619 { 00:14:26.619 "nbd_device": "/dev/nbd11", 00:14:26.619 "bdev_name": "nvme2n2" 00:14:26.619 }, 00:14:26.619 { 00:14:26.619 "nbd_device": "/dev/nbd12", 00:14:26.619 "bdev_name": "nvme2n3" 00:14:26.619 }, 00:14:26.619 { 00:14:26.619 "nbd_device": "/dev/nbd13", 00:14:26.619 "bdev_name": "nvme3n1" 00:14:26.619 } 00:14:26.619 ]' 00:14:26.619 18:21:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:26.619 18:21:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:14:26.619 /dev/nbd1 00:14:26.619 /dev/nbd10 00:14:26.619 /dev/nbd11 00:14:26.619 /dev/nbd12 00:14:26.619 /dev/nbd13' 00:14:26.619 18:21:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:14:26.619 /dev/nbd1 00:14:26.619 /dev/nbd10 00:14:26.619 /dev/nbd11 00:14:26.619 /dev/nbd12 00:14:26.619 /dev/nbd13' 00:14:26.619 18:21:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:26.619 18:21:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:14:26.619 18:21:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:14:26.619 18:21:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:14:26.619 18:21:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:14:26.619 18:21:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:14:26.619 18:21:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:26.619 18:21:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:14:26.620 18:21:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:14:26.620 18:21:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:14:26.620 18:21:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:14:26.620 18:21:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:14:26.877 256+0 records in 00:14:26.877 256+0 records out 00:14:26.877 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0077868 s, 135 MB/s 00:14:26.877 18:21:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:26.877 18:21:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:14:26.877 256+0 records in 00:14:26.877 256+0 records out 00:14:26.877 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.15438 s, 6.8 MB/s 00:14:26.877 18:21:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:26.877 18:21:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:14:27.134 256+0 records in 00:14:27.134 256+0 records out 00:14:27.134 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.190885 s, 
5.5 MB/s 00:14:27.134 18:21:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:27.134 18:21:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:14:27.134 256+0 records in 00:14:27.134 256+0 records out 00:14:27.134 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.15 s, 7.0 MB/s 00:14:27.134 18:21:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:27.134 18:21:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:14:27.393 256+0 records in 00:14:27.393 256+0 records out 00:14:27.393 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.14801 s, 7.1 MB/s 00:14:27.393 18:21:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:27.393 18:21:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:14:27.651 256+0 records in 00:14:27.651 256+0 records out 00:14:27.651 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.149796 s, 7.0 MB/s 00:14:27.651 18:21:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:27.651 18:21:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:14:27.651 256+0 records in 00:14:27.651 256+0 records out 00:14:27.651 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.125914 s, 8.3 MB/s 00:14:27.651 18:21:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:14:27.651 18:21:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:27.651 18:21:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:14:27.651 18:21:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:14:27.651 18:21:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:14:27.651 18:21:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:14:27.651 18:21:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:14:27.651 18:21:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:27.651 18:21:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:14:27.651 18:21:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:27.651 18:21:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:14:27.651 18:21:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:27.651 18:21:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:14:27.651 18:21:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:27.651 18:21:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:14:27.651 
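
The data pass above is a plain write/read-back check: one shared 1 MiB file of /dev/urandom data (256 x 4 KiB blocks) is written to every exported node with O_DIRECT, then cmp -b -n 1M compares each node byte-for-byte against the source file. Condensed from nbd_common.sh@70-85 as traced:

    tmp=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
    dd if=/dev/urandom of="$tmp" bs=4096 count=256              # 1 MiB test pattern
    for dev in /dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13; do
        dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct   # write phase
    done
    for dev in /dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13; do
        cmp -b -n 1M "$tmp" "$dev"                              # verify phase, fails on first mismatch
    done
    rm "$tmp"
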
18:21:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:27.651 18:21:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:14:27.651 18:21:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:27.652 18:21:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:14:27.652 18:21:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:14:27.652 18:21:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:14:27.652 18:21:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:27.652 18:21:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:27.652 18:21:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:27.652 18:21:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:14:27.652 18:21:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:27.652 18:21:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:14:27.909 18:21:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:27.909 18:21:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:27.909 18:21:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:27.910 18:21:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:27.910 18:21:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:27.910 18:21:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:27.910 18:21:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:27.910 18:21:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:27.910 18:21:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:27.910 18:21:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:14:28.168 18:21:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:28.426 18:21:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:28.427 18:21:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:28.427 18:21:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:28.427 18:21:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:28.427 18:21:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:28.427 18:21:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:28.427 18:21:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:28.427 18:21:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:28.427 18:21:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk 
/dev/nbd10 00:14:28.690 18:21:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:14:28.690 18:21:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:14:28.690 18:21:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:14:28.690 18:21:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:28.690 18:21:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:28.690 18:21:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:14:28.690 18:21:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:28.690 18:21:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:28.690 18:21:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:28.690 18:21:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:14:28.949 18:21:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:14:28.949 18:21:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:14:28.949 18:21:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:14:28.949 18:21:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:28.949 18:21:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:28.949 18:21:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:14:28.949 18:21:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:28.949 18:21:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:28.949 18:21:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:28.949 18:21:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:14:29.208 18:21:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:14:29.208 18:21:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:14:29.208 18:21:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:14:29.208 18:21:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:29.208 18:21:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:29.208 18:21:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:14:29.208 18:21:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:29.208 18:21:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:29.208 18:21:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:29.208 18:21:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:14:29.208 18:21:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:14:29.208 18:21:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:14:29.208 18:21:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:14:29.208 18:21:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:29.208 18:21:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:29.208 
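
Teardown mirrors setup: after each nbd_stop_disk RPC, waitfornbd_exit (nbd_common.sh@35-45) polls /proc/partitions until the device name is gone. A sketch follows; note the polarity of the break is inferred, since with set -x both the grep and the break are printed regardless of grep's exit status, and the retry delay is assumed:

    waitfornbd_exit() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions || break   # gone -> done
            sleep 0.1   # assumed
        done
        return 0
    }
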
18:21:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:14:29.208 18:21:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:29.208 18:21:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:29.466 18:21:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:29.466 18:21:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:29.466 18:21:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:29.466 18:21:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:14:29.466 18:21:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:14:29.466 18:21:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:29.728 18:21:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:14:29.728 18:21:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:29.728 18:21:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:14:29.728 18:21:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:14:29.728 18:21:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:14:29.728 18:21:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:14:29.728 18:21:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:14:29.728 18:21:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:14:29.729 18:21:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:14:29.729 18:21:15 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:14:29.729 18:21:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:29.729 18:21:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:29.729 18:21:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:14:29.729 18:21:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:14:29.729 18:21:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:14:30.008 malloc_lvol_verify 00:14:30.008 18:21:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:14:30.008 4e3a00e8-6bd1-40ef-a6fa-27681a59268e 00:14:30.008 18:21:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:14:30.280 efb4bacc-d4fe-4bec-95d5-3a11d778c55b 00:14:30.280 18:21:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:14:30.538 /dev/nbd0 00:14:30.538 18:21:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:14:30.538 mke2fs 1.46.5 (30-Dec-2021) 00:14:30.538 Discarding device blocks: 0/4096 done 00:14:30.538 Creating filesystem with 4096 1k blocks and 1024 inodes 00:14:30.538 
00:14:30.538 Allocating group tables: 0/1 done 00:14:30.538 Writing inode tables: 0/1 done 00:14:30.538 Creating journal (1024 blocks): done 00:14:30.539 Writing superblocks and filesystem accounting information: 0/1 done 00:14:30.539 00:14:30.539 18:21:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:14:30.539 18:21:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:14:30.539 18:21:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:30.539 18:21:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:30.539 18:21:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:30.539 18:21:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:14:30.539 18:21:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:30.539 18:21:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:14:30.798 18:21:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:30.798 18:21:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:30.798 18:21:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:30.798 18:21:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:30.798 18:21:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:30.798 18:21:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:30.798 18:21:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:30.798 18:21:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:30.798 18:21:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:14:30.798 18:21:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:14:30.798 18:21:17 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # killprocess 86766 00:14:30.798 18:21:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@948 -- # '[' -z 86766 ']' 00:14:30.798 18:21:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@952 -- # kill -0 86766 00:14:30.798 18:21:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@953 -- # uname 00:14:30.798 18:21:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:30.798 18:21:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86766 00:14:30.798 killing process with pid 86766 00:14:30.798 18:21:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:30.798 18:21:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:30.798 18:21:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86766' 00:14:30.798 18:21:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@967 -- # kill 86766 00:14:30.798 18:21:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # wait 86766 00:14:31.057 18:21:17 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:14:31.057 00:14:31.057 real 0m11.055s 00:14:31.057 user 0m15.823s 00:14:31.057 sys 0m3.960s 00:14:31.057 18:21:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:31.057 18:21:17 blockdev_xnvme.bdev_nbd -- 
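
The nbd_with_lvol_verify step above builds a full stack before formatting it: a malloc bdev (16 MB, 512-byte blocks), an lvstore on top, a 4 MB lvol inside it, exported over /dev/nbd0 and formatted with mkfs.ext4. The RPC sequence, collected here with the socket flag factored into $RPC for readability:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $RPC bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MB backing bdev, 512 B blocks
    $RPC bdev_lvol_create_lvstore malloc_lvol_verify lvs   # prints the new lvstore UUID
    $RPC bdev_lvol_create lvol 4 -l lvs                    # 4 MB lvol in that store
    $RPC nbd_start_disk lvs/lvol /dev/nbd0                 # export through the kernel nbd driver
    mkfs.ext4 /dev/nbd0                                    # 4096 1k blocks, as in the log above
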
common/autotest_common.sh@10 -- # set +x 00:14:31.057 ************************************ 00:14:31.057 END TEST bdev_nbd 00:14:31.057 ************************************ 00:14:31.057 18:21:17 blockdev_xnvme -- common/autotest_common.sh@1142 -- # return 0 00:14:31.057 18:21:17 blockdev_xnvme -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:14:31.057 18:21:17 blockdev_xnvme -- bdev/blockdev.sh@764 -- # '[' xnvme = nvme ']' 00:14:31.057 18:21:17 blockdev_xnvme -- bdev/blockdev.sh@764 -- # '[' xnvme = gpt ']' 00:14:31.057 18:21:17 blockdev_xnvme -- bdev/blockdev.sh@768 -- # run_test bdev_fio fio_test_suite '' 00:14:31.057 18:21:17 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:31.057 18:21:17 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:31.057 18:21:17 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:31.057 ************************************ 00:14:31.057 START TEST bdev_fio 00:14:31.057 ************************************ 00:14:31.057 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:14:31.057 18:21:17 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1123 -- # fio_test_suite '' 00:14:31.057 18:21:17 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@331 -- # local env_context 00:14:31.057 18:21:17 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:14:31.057 18:21:17 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@336 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:14:31.057 18:21:17 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # echo '' 00:14:31.057 18:21:17 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # sed s/--env-context=// 00:14:31.057 18:21:17 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # env_context= 00:14:31.057 18:21:17 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:14:31.057 18:21:17 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:31.057 18:21:17 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:14:31.057 18:21:17 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:14:31.057 18:21:17 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:14:31.057 18:21:17 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:14:31.057 18:21:17 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:14:31.057 18:21:17 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:14:31.057 18:21:17 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:14:31.057 18:21:17 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:31.057 18:21:17 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:14:31.057 18:21:17 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:14:31.057 18:21:17 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:14:31.057 18:21:17 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:14:31.057 18:21:17 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:14:31.316 18:21:17 
blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:14:31.316 18:21:17 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:14:31.316 18:21:17 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:14:31.316 18:21:17 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_nvme0n1]' 00:14:31.316 18:21:17 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=nvme0n1 00:14:31.316 18:21:17 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:14:31.316 18:21:17 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_nvme1n1]' 00:14:31.316 18:21:17 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=nvme1n1 00:14:31.316 18:21:17 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:14:31.316 18:21:17 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_nvme2n1]' 00:14:31.316 18:21:17 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=nvme2n1 00:14:31.316 18:21:17 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:14:31.316 18:21:17 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_nvme2n2]' 00:14:31.316 18:21:17 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=nvme2n2 00:14:31.316 18:21:17 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:14:31.316 18:21:17 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_nvme2n3]' 00:14:31.316 18:21:17 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=nvme2n3 00:14:31.316 18:21:17 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:14:31.316 18:21:17 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_nvme3n1]' 00:14:31.316 18:21:17 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=nvme3n1 00:14:31.316 18:21:17 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@347 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:14:31.316 18:21:17 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:31.316 18:21:17 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:14:31.316 18:21:17 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:31.316 18:21:17 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:14:31.316 ************************************ 00:14:31.316 START TEST bdev_fio_rw_verify 00:14:31.316 ************************************ 00:14:31.316 18:21:17 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1123 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:31.316 18:21:17 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:31.316 18:21:17 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:14:31.316 18:21:17 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:31.316 18:21:17 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:14:31.316 18:21:17 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:31.316 18:21:17 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:14:31.316 18:21:17 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:14:31.316 18:21:17 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:14:31.316 18:21:17 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:14:31.316 18:21:17 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:31.316 18:21:17 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:14:31.316 18:21:17 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:31.316 18:21:17 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:31.316 18:21:17 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # break 00:14:31.316 18:21:17 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:31.316 18:21:17 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:31.575 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:31.575 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:31.575 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:31.575 job_nvme2n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:31.575 job_nvme2n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:31.576 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:31.576 fio-3.35 00:14:31.576 Starting 6 threads 00:14:43.780 00:14:43.780 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=87173: Thu Jul 11 18:21:28 2024 00:14:43.780 read: IOPS=27.7k, 
BW=108MiB/s (114MB/s)(1083MiB/10001msec) 00:14:43.780 slat (usec): min=3, max=1138, avg= 7.32, stdev= 4.74 00:14:43.780 clat (usec): min=110, max=6558, avg=667.20, stdev=240.91 00:14:43.780 lat (usec): min=116, max=6574, avg=674.52, stdev=241.59 00:14:43.780 clat percentiles (usec): 00:14:43.780 | 50.000th=[ 693], 99.000th=[ 1237], 99.900th=[ 1778], 99.990th=[ 5014], 00:14:43.780 | 99.999th=[ 6521] 00:14:43.780 write: IOPS=28.1k, BW=110MiB/s (115MB/s)(1096MiB/10001msec); 0 zone resets 00:14:43.780 slat (usec): min=14, max=4205, avg=27.91, stdev=31.74 00:14:43.780 clat (usec): min=100, max=6801, avg=759.46, stdev=256.90 00:14:43.780 lat (usec): min=118, max=6828, avg=787.37, stdev=259.46 00:14:43.780 clat percentiles (usec): 00:14:43.780 | 50.000th=[ 766], 99.000th=[ 1401], 99.900th=[ 2008], 99.990th=[ 6259], 00:14:43.780 | 99.999th=[ 6783] 00:14:43.780 bw ( KiB/s): min=94340, max=134967, per=99.92%, avg=112108.26, stdev=2112.99, samples=114 00:14:43.780 iops : min=23584, max=33741, avg=28026.74, stdev=528.27, samples=114 00:14:43.780 lat (usec) : 250=2.41%, 500=16.55%, 750=35.79%, 1000=36.17% 00:14:43.780 lat (msec) : 2=8.99%, 4=0.05%, 10=0.04% 00:14:43.780 cpu : usr=60.88%, sys=25.65%, ctx=7396, majf=0, minf=25620 00:14:43.780 IO depths : 1=12.0%, 2=24.4%, 4=50.5%, 8=13.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:43.780 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:43.780 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:43.780 issued rwts: total=277168,280531,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:43.780 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:43.780 00:14:43.780 Run status group 0 (all jobs): 00:14:43.780 READ: bw=108MiB/s (114MB/s), 108MiB/s-108MiB/s (114MB/s-114MB/s), io=1083MiB (1135MB), run=10001-10001msec 00:14:43.780 WRITE: bw=110MiB/s (115MB/s), 110MiB/s-110MiB/s (115MB/s-115MB/s), io=1096MiB (1149MB), run=10001-10001msec 00:14:43.780 ----------------------------------------------------- 00:14:43.780 Suppressions used: 00:14:43.780 count bytes template 00:14:43.780 6 48 /usr/src/fio/parse.c 00:14:43.780 3164 303744 /usr/src/fio/iolog.c 00:14:43.780 1 8 libtcmalloc_minimal.so 00:14:43.780 1 904 libcrypto.so 00:14:43.780 ----------------------------------------------------- 00:14:43.780 00:14:43.780 00:14:43.780 real 0m11.219s 00:14:43.780 user 0m37.308s 00:14:43.780 sys 0m15.715s 00:14:43.780 18:21:28 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:43.780 18:21:28 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:14:43.780 ************************************ 00:14:43.780 END TEST bdev_fio_rw_verify 00:14:43.780 ************************************ 00:14:43.780 18:21:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1142 -- # return 0 00:14:43.780 18:21:28 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f 00:14:43.780 18:21:28 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@351 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:43.780 18:21:28 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:14:43.780 18:21:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:43.780 18:21:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:14:43.780 18:21:28 blockdev_xnvme.bdev_fio -- 
common/autotest_common.sh@1282 -- # local bdev_type= 00:14:43.780 18:21:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:14:43.780 18:21:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:14:43.780 18:21:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:14:43.780 18:21:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:14:43.780 18:21:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:14:43.780 18:21:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:43.780 18:21:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:14:43.780 18:21:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:14:43.780 18:21:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:14:43.780 18:21:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:14:43.780 18:21:28 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:14:43.781 18:21:28 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "db5c42e6-0ce8-4034-8a75-520e1c27d072"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "db5c42e6-0ce8-4034-8a75-520e1c27d072",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "95ed11d4-efb8-41b7-8670-34d14954a12a"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "95ed11d4-efb8-41b7-8670-34d14954a12a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "91e46d94-9960-45c9-b97e-ef950460b507"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "91e46d94-9960-45c9-b97e-ef950460b507",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' 
' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "231ef4a2-144d-4dda-aaf7-43252700a6e4"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "231ef4a2-144d-4dda-aaf7-43252700a6e4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "17a1fccc-b968-42ff-abba-3da3ebd9f21c"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "17a1fccc-b968-42ff-abba-3da3ebd9f21c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "a191e82f-ef7a-469b-a429-c7907a8621e2"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "a191e82f-ef7a-469b-a429-c7907a8621e2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:14:43.781 18:21:28 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@355 -- # [[ -n '' ]] 00:14:43.781 18:21:28 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:43.781 /home/vagrant/spdk_repo/spdk 00:14:43.781 18:21:28 
blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # popd 00:14:43.781 18:21:28 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # trap - SIGINT SIGTERM EXIT 00:14:43.781 18:21:28 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@364 -- # return 0 00:14:43.781 00:14:43.781 real 0m11.414s 00:14:43.781 user 0m37.413s 00:14:43.781 sys 0m15.800s 00:14:43.781 ************************************ 00:14:43.781 END TEST bdev_fio 00:14:43.781 ************************************ 00:14:43.781 18:21:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:43.781 18:21:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:14:43.781 18:21:28 blockdev_xnvme -- common/autotest_common.sh@1142 -- # return 0 00:14:43.781 18:21:28 blockdev_xnvme -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:14:43.781 18:21:28 blockdev_xnvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:14:43.781 18:21:28 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:14:43.781 18:21:28 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:43.781 18:21:28 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:43.781 ************************************ 00:14:43.781 START TEST bdev_verify 00:14:43.781 ************************************ 00:14:43.781 18:21:28 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:14:43.781 [2024-07-11 18:21:28.986096] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:14:43.781 [2024-07-11 18:21:28.986330] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87345 ] 00:14:43.781 [2024-07-11 18:21:29.137471] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:43.781 [2024-07-11 18:21:29.183561] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:43.781 [2024-07-11 18:21:29.183615] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:43.781 Running I/O for 5 seconds... 
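For reference, the bdev_verify invocation above can be reproduced directly. A minimal bash sketch follows, with the flags exactly as they appear in the log; the annotation on -C is an assumption (consult bdevperf --help on your build), while the others follow the per-job lines in the results below:

  # Sketch: the bdev_verify invocation as run by the harness above.
  # Paths are this CI host's; adjust for your own checkout.
  BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
  CONF=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json

  args=(
    --json "$CONF"   # bdev definitions for the six xnvme bdevs
    -q 128           # per-job queue depth ("depth: 128" in the job lines)
    -o 4096          # I/O size in bytes ("IO size: 4096")
    -w verify        # write, read back, and compare
    -t 5             # run time in seconds
    -C               # assumption: submit from every core to every bdev,
                     # which would explain the paired 0x1/0x2 job lines per bdev
    -m 0x3           # core mask: the two reactors started on cores 0 and 1
  )
  "$BDEVPERF" "${args[@]}" ''   # trailing '' positional arg as in the harness call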
00:14:49.045 00:14:49.045 Latency(us) 00:14:49.045 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:49.045 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:49.045 Verification LBA range: start 0x0 length 0xa0000 00:14:49.045 nvme0n1 : 5.07 1716.44 6.70 0.00 0.00 74436.27 10307.03 75306.82 00:14:49.045 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:49.045 Verification LBA range: start 0xa0000 length 0xa0000 00:14:49.045 nvme0n1 : 5.03 1782.08 6.96 0.00 0.00 71688.54 6970.65 90082.21 00:14:49.045 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:49.045 Verification LBA range: start 0x0 length 0xbd0bd 00:14:49.045 nvme1n1 : 5.06 2833.00 11.07 0.00 0.00 44942.44 5213.09 81979.58 00:14:49.045 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:49.045 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:14:49.045 nvme1n1 : 5.05 2625.51 10.26 0.00 0.00 48373.08 3693.85 135361.63 00:14:49.045 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:49.045 Verification LBA range: start 0x0 length 0x80000 00:14:49.045 nvme2n1 : 5.07 1715.40 6.70 0.00 0.00 74109.66 10902.81 79119.83 00:14:49.045 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:49.045 Verification LBA range: start 0x80000 length 0x80000 00:14:49.045 nvme2n1 : 5.06 1772.12 6.92 0.00 0.00 71696.63 8281.37 82932.83 00:14:49.045 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:49.045 Verification LBA range: start 0x0 length 0x80000 00:14:49.045 nvme2n2 : 5.06 1720.57 6.72 0.00 0.00 73752.89 6434.44 79119.83 00:14:49.045 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:49.045 Verification LBA range: start 0x80000 length 0x80000 00:14:49.045 nvme2n2 : 5.06 1794.80 7.01 0.00 0.00 70633.83 2964.01 67204.19 00:14:49.045 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:49.045 Verification LBA range: start 0x0 length 0x80000 00:14:49.045 nvme2n3 : 5.06 1718.96 6.71 0.00 0.00 73684.86 9115.46 67204.19 00:14:49.045 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:49.045 Verification LBA range: start 0x80000 length 0x80000 00:14:49.045 nvme2n3 : 5.07 1793.49 7.01 0.00 0.00 70535.58 5928.03 74353.57 00:14:49.045 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:49.045 Verification LBA range: start 0x0 length 0x20000 00:14:49.045 nvme3n1 : 5.07 1717.63 6.71 0.00 0.00 73615.73 6166.34 72923.69 00:14:49.045 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:49.045 Verification LBA range: start 0x20000 length 0x20000 00:14:49.045 nvme3n1 : 5.07 1792.33 7.00 0.00 0.00 70464.93 8043.05 84839.33 00:14:49.045 =================================================================================================================== 00:14:49.045 Total : 22982.34 89.77 0.00 0.00 66298.86 2964.01 135361.63 00:14:49.045 00:14:49.045 real 0m5.841s 00:14:49.045 user 0m9.039s 00:14:49.045 sys 0m1.683s 00:14:49.045 18:21:34 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:49.045 18:21:34 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:14:49.045 ************************************ 00:14:49.045 END TEST bdev_verify 00:14:49.045 ************************************ 00:14:49.045 18:21:34 blockdev_xnvme -- 
common/autotest_common.sh@1142 -- # return 0 00:14:49.045 18:21:34 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:14:49.045 18:21:34 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:14:49.045 18:21:34 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:49.045 18:21:34 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:49.045 ************************************ 00:14:49.045 START TEST bdev_verify_big_io 00:14:49.045 ************************************ 00:14:49.045 18:21:34 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:14:49.045 [2024-07-11 18:21:34.886080] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:14:49.045 [2024-07-11 18:21:34.886293] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87428 ] 00:14:49.045 [2024-07-11 18:21:35.031353] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:49.045 [2024-07-11 18:21:35.071395] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:49.045 [2024-07-11 18:21:35.071443] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:49.045 Running I/O for 5 seconds... 00:14:55.607 00:14:55.607 Latency(us) 00:14:55.607 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:55.607 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:55.607 Verification LBA range: start 0x0 length 0xa000 00:14:55.607 nvme0n1 : 5.99 116.22 7.26 0.00 0.00 1073409.14 117249.86 1700599.62 00:14:55.607 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:55.607 Verification LBA range: start 0xa000 length 0xa000 00:14:55.607 nvme0n1 : 5.78 166.08 10.38 0.00 0.00 745863.29 27167.65 1052389.00 00:14:55.607 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:55.607 Verification LBA range: start 0x0 length 0xbd0b 00:14:55.607 nvme1n1 : 5.99 120.14 7.51 0.00 0.00 1009573.60 14477.50 1944631.85 00:14:55.607 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:55.607 Verification LBA range: start 0xbd0b length 0xbd0b 00:14:55.607 nvme1n1 : 5.98 128.33 8.02 0.00 0.00 920961.71 90558.84 1006632.96 00:14:55.607 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:55.607 Verification LBA range: start 0x0 length 0x8000 00:14:55.607 nvme2n1 : 5.98 143.21 8.95 0.00 0.00 820829.77 93418.59 922746.88 00:14:55.607 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:55.607 Verification LBA range: start 0x8000 length 0x8000 00:14:55.607 nvme2n1 : 5.90 94.93 5.93 0.00 0.00 1213023.70 185883.93 1967509.88 00:14:55.607 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:55.607 Verification LBA range: start 0x0 length 0x8000 00:14:55.607 nvme2n2 : 6.00 117.41 7.34 0.00 0.00 969515.75 107240.73 1426063.36 00:14:55.607 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO 
size: 65536) 00:14:55.607 Verification LBA range: start 0x8000 length 0x8000 00:14:55.607 nvme2n2 : 5.90 105.72 6.61 0.00 0.00 1071152.69 101044.60 2059021.96 00:14:55.607 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:55.607 Verification LBA range: start 0x0 length 0x8000 00:14:55.607 nvme2n3 : 5.98 123.07 7.69 0.00 0.00 897414.50 86269.21 1609087.53 00:14:55.607 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:55.607 Verification LBA range: start 0x8000 length 0x8000 00:14:55.607 nvme2n3 : 5.99 114.86 7.18 0.00 0.00 955789.74 40751.48 2531834.41 00:14:55.607 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:55.607 Verification LBA range: start 0x0 length 0x2000 00:14:55.607 nvme3n1 : 6.00 133.31 8.33 0.00 0.00 807610.52 13345.51 1715851.64 00:14:55.607 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:55.607 Verification LBA range: start 0x2000 length 0x2000 00:14:55.607 nvme3n1 : 6.00 133.28 8.33 0.00 0.00 801259.46 4825.83 1685347.61 00:14:55.607 =================================================================================================================== 00:14:55.607 Total : 1496.56 93.53 0.00 0.00 924074.44 4825.83 2531834.41 00:14:55.607 00:14:55.607 real 0m6.784s 00:14:55.607 user 0m12.385s 00:14:55.607 sys 0m0.485s 00:14:55.607 ************************************ 00:14:55.607 END TEST bdev_verify_big_io 00:14:55.607 ************************************ 00:14:55.607 18:21:41 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:55.607 18:21:41 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:14:55.607 18:21:41 blockdev_xnvme -- common/autotest_common.sh@1142 -- # return 0 00:14:55.607 18:21:41 blockdev_xnvme -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:55.607 18:21:41 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:14:55.607 18:21:41 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:55.607 18:21:41 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:55.607 ************************************ 00:14:55.607 START TEST bdev_write_zeroes 00:14:55.607 ************************************ 00:14:55.607 18:21:41 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:55.607 [2024-07-11 18:21:41.742258] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:14:55.607 [2024-07-11 18:21:41.742480] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87527 ] 00:14:55.607 [2024-07-11 18:21:41.897105] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:55.607 [2024-07-11 18:21:41.942798] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:55.865 Running I/O for 1 seconds... 
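Every stage in this log is framed the same way: a run_test call prints the START TEST banner, times the command (producing the real/user/sys lines), prints END TEST, and propagates the exit status. run_test itself is defined in common/autotest_common.sh and is not reproduced in this log; a simplified sketch of its observable behavior, assuming nothing beyond what the banners and timings imply:

  # Simplified sketch of run_test's visible behavior. The real helper in
  # common/autotest_common.sh also manages xtrace and failure bookkeeping.
  run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"                 # emits the real/user/sys lines after each stage
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return "$rc"              # the negative tests below propagate 234 this way
  }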
00:14:56.799 00:14:56.799 Latency(us) 00:14:56.799 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:56.799 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:56.799 nvme0n1 : 1.00 10449.24 40.82 0.00 0.00 12234.55 7328.12 19065.02 00:14:56.799 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:56.799 nvme1n1 : 1.01 15156.34 59.20 0.00 0.00 8406.67 4319.42 15847.80 00:14:56.799 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:56.799 nvme2n1 : 1.01 10473.80 40.91 0.00 0.00 12147.89 6523.81 20971.52 00:14:56.799 Job: nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:56.799 nvme2n2 : 1.02 10458.44 40.85 0.00 0.00 12142.65 5570.56 20971.52 00:14:56.799 Job: nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:56.799 nvme2n3 : 1.02 10442.95 40.79 0.00 0.00 12149.92 5749.29 20971.52 00:14:56.799 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:56.799 nvme3n1 : 1.02 10427.38 40.73 0.00 0.00 12158.99 5868.45 21209.83 00:14:56.799 =================================================================================================================== 00:14:56.799 Total : 67408.15 263.31 0.00 0.00 11323.07 4319.42 21209.83 00:14:57.059 ************************************ 00:14:57.059 END TEST bdev_write_zeroes 00:14:57.059 ************************************ 00:14:57.059 00:14:57.059 real 0m1.720s 00:14:57.059 user 0m0.995s 00:14:57.059 sys 0m0.556s 00:14:57.059 18:21:43 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:57.059 18:21:43 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:14:57.059 18:21:43 blockdev_xnvme -- common/autotest_common.sh@1142 -- # return 0 00:14:57.059 18:21:43 blockdev_xnvme -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:57.059 18:21:43 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:14:57.059 18:21:43 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:57.059 18:21:43 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:57.059 ************************************ 00:14:57.059 START TEST bdev_json_nonenclosed 00:14:57.059 ************************************ 00:14:57.059 18:21:43 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:57.317 [2024-07-11 18:21:43.511826] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:14:57.317 [2024-07-11 18:21:43.512015] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87564 ] 00:14:57.317 [2024-07-11 18:21:43.662732] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:57.317 [2024-07-11 18:21:43.700029] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:57.317 [2024-07-11 18:21:43.700168] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:14:57.317 [2024-07-11 18:21:43.700202] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:14:57.317 [2024-07-11 18:21:43.700218] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:57.574 00:14:57.574 real 0m0.388s 00:14:57.574 user 0m0.163s 00:14:57.574 sys 0m0.121s 00:14:57.574 18:21:43 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # es=234 00:14:57.574 18:21:43 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:57.574 18:21:43 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:14:57.574 ************************************ 00:14:57.574 END TEST bdev_json_nonenclosed 00:14:57.574 ************************************ 00:14:57.574 18:21:43 blockdev_xnvme -- common/autotest_common.sh@1142 -- # return 234 00:14:57.574 18:21:43 blockdev_xnvme -- bdev/blockdev.sh@782 -- # true 00:14:57.574 18:21:43 blockdev_xnvme -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:57.574 18:21:43 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:14:57.574 18:21:43 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:57.574 18:21:43 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:57.574 ************************************ 00:14:57.574 START TEST bdev_json_nonarray 00:14:57.574 ************************************ 00:14:57.574 18:21:43 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:57.574 [2024-07-11 18:21:43.936323] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:14:57.574 [2024-07-11 18:21:43.936785] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87588 ] 00:14:57.832 [2024-07-11 18:21:44.077591] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:57.832 [2024-07-11 18:21:44.116681] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:57.832 [2024-07-11 18:21:44.117055] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
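Both JSON negative tests feed bdevperf a deliberately malformed config and expect exit status 234 (the es=234 and return 234 lines in the log). The actual contents of nonenclosed.json and nonarray.json are not shown here; plausible minimal versions matching the two error messages above would be:

  # Assumption: minimal configs that would trip the two json_config errors
  # seen above. The real files under test/bdev may differ.

  cat > nonenclosed.json <<'EOF'
  "subsystems": []
  EOF
  # -> json_config: "Invalid JSON configuration: not enclosed in {}."

  cat > nonarray.json <<'EOF'
  { "subsystems": { "bdev": {} } }
  EOF
  # -> json_config: "Invalid JSON configuration: 'subsystems' should be an array."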
00:14:57.832 [2024-07-11 18:21:44.117095] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:14:57.832 [2024-07-11 18:21:44.117133] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:57.832 00:14:57.832 real 0m0.382s 00:14:57.832 user 0m0.180s 00:14:57.832 sys 0m0.097s 00:14:57.832 18:21:44 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # es=234 00:14:57.832 18:21:44 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:57.832 ************************************ 00:14:57.832 18:21:44 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:14:57.832 END TEST bdev_json_nonarray 00:14:57.832 ************************************ 00:14:58.089 18:21:44 blockdev_xnvme -- common/autotest_common.sh@1142 -- # return 234 00:14:58.089 18:21:44 blockdev_xnvme -- bdev/blockdev.sh@785 -- # true 00:14:58.089 18:21:44 blockdev_xnvme -- bdev/blockdev.sh@787 -- # [[ xnvme == bdev ]] 00:14:58.089 18:21:44 blockdev_xnvme -- bdev/blockdev.sh@794 -- # [[ xnvme == gpt ]] 00:14:58.089 18:21:44 blockdev_xnvme -- bdev/blockdev.sh@798 -- # [[ xnvme == crypto_sw ]] 00:14:58.089 18:21:44 blockdev_xnvme -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:14:58.089 18:21:44 blockdev_xnvme -- bdev/blockdev.sh@811 -- # cleanup 00:14:58.090 18:21:44 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:14:58.090 18:21:44 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:14:58.090 18:21:44 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:14:58.090 18:21:44 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:14:58.090 18:21:44 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:14:58.090 18:21:44 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:14:58.090 18:21:44 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:58.655 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:59.587 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:14:59.844 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:14:59.844 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:14:59.844 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:14:59.844 ************************************ 00:14:59.844 END TEST blockdev_xnvme 00:14:59.844 ************************************ 00:14:59.844 00:14:59.844 real 0m49.645s 00:14:59.844 user 1m28.813s 00:14:59.844 sys 0m26.556s 00:14:59.844 18:21:46 blockdev_xnvme -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:59.844 18:21:46 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:59.844 18:21:46 -- common/autotest_common.sh@1142 -- # return 0 00:14:59.844 18:21:46 -- spdk/autotest.sh@251 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:14:59.844 18:21:46 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:59.844 18:21:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:59.844 18:21:46 -- common/autotest_common.sh@10 -- # set +x 00:14:59.844 ************************************ 00:14:59.844 START TEST ublk 00:14:59.845 ************************************ 00:14:59.845 18:21:46 ublk -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:15:00.102 * Looking for test storage... 
00:15:00.102 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:15:00.102 18:21:46 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:15:00.102 18:21:46 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:15:00.102 18:21:46 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:15:00.102 18:21:46 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:15:00.102 18:21:46 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:15:00.102 18:21:46 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:15:00.102 18:21:46 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:15:00.102 18:21:46 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:15:00.102 18:21:46 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:15:00.102 18:21:46 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:15:00.102 18:21:46 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:15:00.102 18:21:46 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:15:00.102 18:21:46 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:15:00.102 18:21:46 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:15:00.102 18:21:46 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:15:00.102 18:21:46 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:15:00.102 18:21:46 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:15:00.102 18:21:46 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:15:00.102 18:21:46 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:15:00.102 18:21:46 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:15:00.102 18:21:46 ublk -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:00.102 18:21:46 ublk -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:00.102 18:21:46 ublk -- common/autotest_common.sh@10 -- # set +x 00:15:00.102 ************************************ 00:15:00.102 START TEST test_save_ublk_config 00:15:00.102 ************************************ 00:15:00.102 18:21:46 ublk.test_save_ublk_config -- common/autotest_common.sh@1123 -- # test_save_config 00:15:00.102 18:21:46 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:15:00.102 18:21:46 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=87866 00:15:00.102 18:21:46 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:15:00.102 18:21:46 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:15:00.102 18:21:46 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 87866 00:15:00.102 18:21:46 ublk.test_save_ublk_config -- common/autotest_common.sh@829 -- # '[' -z 87866 ']' 00:15:00.102 18:21:46 ublk.test_save_ublk_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:00.102 18:21:46 ublk.test_save_ublk_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:00.102 18:21:46 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:00.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:00.102 18:21:46 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:00.102 18:21:46 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:00.102 [2024-07-11 18:21:46.438631] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:15:00.102 [2024-07-11 18:21:46.439092] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87866 ] 00:15:00.360 [2024-07-11 18:21:46.587952] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:00.360 [2024-07-11 18:21:46.638734] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:00.973 18:21:47 ublk.test_save_ublk_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:00.973 18:21:47 ublk.test_save_ublk_config -- common/autotest_common.sh@862 -- # return 0 00:15:00.973 18:21:47 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:15:00.973 18:21:47 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:15:00.973 18:21:47 ublk.test_save_ublk_config -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:00.973 18:21:47 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:00.973 [2024-07-11 18:21:47.373173] ublk.c: 537:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:15:00.973 [2024-07-11 18:21:47.373500] ublk.c: 742:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:01.231 malloc0 00:15:01.231 [2024-07-11 18:21:47.397270] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:15:01.231 [2024-07-11 18:21:47.397380] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:15:01.231 [2024-07-11 18:21:47.397403] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:15:01.231 [2024-07-11 18:21:47.397417] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:15:01.231 [2024-07-11 18:21:47.405275] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:01.231 [2024-07-11 18:21:47.405315] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:01.231 [2024-07-11 18:21:47.413117] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:01.231 [2024-07-11 18:21:47.413255] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:15:01.231 [2024-07-11 18:21:47.428224] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:15:01.231 0 00:15:01.231 18:21:47 ublk.test_save_ublk_config -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:01.231 18:21:47 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:15:01.231 18:21:47 ublk.test_save_ublk_config -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:01.231 18:21:47 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:01.489 18:21:47 ublk.test_save_ublk_config -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:01.489 18:21:47 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:15:01.489 "subsystems": [ 00:15:01.489 { 00:15:01.489 "subsystem": "keyring", 00:15:01.489 "config": [] 00:15:01.489 }, 00:15:01.489 { 00:15:01.489 "subsystem": "iobuf", 00:15:01.489 "config": [ 00:15:01.489 { 00:15:01.489 "method": "iobuf_set_options", 00:15:01.489 "params": { 00:15:01.489 "small_pool_count": 8192, 00:15:01.489 "large_pool_count": 1024, 00:15:01.489 "small_bufsize": 8192, 00:15:01.489 "large_bufsize": 135168 00:15:01.489 } 00:15:01.489 } 00:15:01.489 ] 00:15:01.489 }, 00:15:01.489 { 
00:15:01.489 "subsystem": "sock", 00:15:01.489 "config": [ 00:15:01.489 { 00:15:01.489 "method": "sock_set_default_impl", 00:15:01.489 "params": { 00:15:01.489 "impl_name": "posix" 00:15:01.489 } 00:15:01.489 }, 00:15:01.489 { 00:15:01.490 "method": "sock_impl_set_options", 00:15:01.490 "params": { 00:15:01.490 "impl_name": "ssl", 00:15:01.490 "recv_buf_size": 4096, 00:15:01.490 "send_buf_size": 4096, 00:15:01.490 "enable_recv_pipe": true, 00:15:01.490 "enable_quickack": false, 00:15:01.490 "enable_placement_id": 0, 00:15:01.490 "enable_zerocopy_send_server": true, 00:15:01.490 "enable_zerocopy_send_client": false, 00:15:01.490 "zerocopy_threshold": 0, 00:15:01.490 "tls_version": 0, 00:15:01.490 "enable_ktls": false 00:15:01.490 } 00:15:01.490 }, 00:15:01.490 { 00:15:01.490 "method": "sock_impl_set_options", 00:15:01.490 "params": { 00:15:01.490 "impl_name": "posix", 00:15:01.490 "recv_buf_size": 2097152, 00:15:01.490 "send_buf_size": 2097152, 00:15:01.490 "enable_recv_pipe": true, 00:15:01.490 "enable_quickack": false, 00:15:01.490 "enable_placement_id": 0, 00:15:01.490 "enable_zerocopy_send_server": true, 00:15:01.490 "enable_zerocopy_send_client": false, 00:15:01.490 "zerocopy_threshold": 0, 00:15:01.490 "tls_version": 0, 00:15:01.490 "enable_ktls": false 00:15:01.490 } 00:15:01.490 } 00:15:01.490 ] 00:15:01.490 }, 00:15:01.490 { 00:15:01.490 "subsystem": "vmd", 00:15:01.490 "config": [] 00:15:01.490 }, 00:15:01.490 { 00:15:01.490 "subsystem": "accel", 00:15:01.490 "config": [ 00:15:01.490 { 00:15:01.490 "method": "accel_set_options", 00:15:01.490 "params": { 00:15:01.490 "small_cache_size": 128, 00:15:01.490 "large_cache_size": 16, 00:15:01.490 "task_count": 2048, 00:15:01.490 "sequence_count": 2048, 00:15:01.490 "buf_count": 2048 00:15:01.490 } 00:15:01.490 } 00:15:01.490 ] 00:15:01.490 }, 00:15:01.490 { 00:15:01.490 "subsystem": "bdev", 00:15:01.490 "config": [ 00:15:01.490 { 00:15:01.490 "method": "bdev_set_options", 00:15:01.490 "params": { 00:15:01.490 "bdev_io_pool_size": 65535, 00:15:01.490 "bdev_io_cache_size": 256, 00:15:01.490 "bdev_auto_examine": true, 00:15:01.490 "iobuf_small_cache_size": 128, 00:15:01.490 "iobuf_large_cache_size": 16 00:15:01.490 } 00:15:01.490 }, 00:15:01.490 { 00:15:01.490 "method": "bdev_raid_set_options", 00:15:01.490 "params": { 00:15:01.490 "process_window_size_kb": 1024 00:15:01.490 } 00:15:01.490 }, 00:15:01.490 { 00:15:01.490 "method": "bdev_iscsi_set_options", 00:15:01.490 "params": { 00:15:01.490 "timeout_sec": 30 00:15:01.490 } 00:15:01.490 }, 00:15:01.490 { 00:15:01.490 "method": "bdev_nvme_set_options", 00:15:01.490 "params": { 00:15:01.490 "action_on_timeout": "none", 00:15:01.490 "timeout_us": 0, 00:15:01.490 "timeout_admin_us": 0, 00:15:01.490 "keep_alive_timeout_ms": 10000, 00:15:01.490 "arbitration_burst": 0, 00:15:01.490 "low_priority_weight": 0, 00:15:01.490 "medium_priority_weight": 0, 00:15:01.490 "high_priority_weight": 0, 00:15:01.490 "nvme_adminq_poll_period_us": 10000, 00:15:01.490 "nvme_ioq_poll_period_us": 0, 00:15:01.490 "io_queue_requests": 0, 00:15:01.490 "delay_cmd_submit": true, 00:15:01.490 "transport_retry_count": 4, 00:15:01.490 "bdev_retry_count": 3, 00:15:01.490 "transport_ack_timeout": 0, 00:15:01.490 "ctrlr_loss_timeout_sec": 0, 00:15:01.490 "reconnect_delay_sec": 0, 00:15:01.490 "fast_io_fail_timeout_sec": 0, 00:15:01.490 "disable_auto_failback": false, 00:15:01.490 "generate_uuids": false, 00:15:01.490 "transport_tos": 0, 00:15:01.490 "nvme_error_stat": false, 00:15:01.490 "rdma_srq_size": 0, 00:15:01.490 
"io_path_stat": false, 00:15:01.490 "allow_accel_sequence": false, 00:15:01.490 "rdma_max_cq_size": 0, 00:15:01.490 "rdma_cm_event_timeout_ms": 0, 00:15:01.490 "dhchap_digests": [ 00:15:01.490 "sha256", 00:15:01.490 "sha384", 00:15:01.490 "sha512" 00:15:01.490 ], 00:15:01.490 "dhchap_dhgroups": [ 00:15:01.490 "null", 00:15:01.490 "ffdhe2048", 00:15:01.490 "ffdhe3072", 00:15:01.490 "ffdhe4096", 00:15:01.490 "ffdhe6144", 00:15:01.490 "ffdhe8192" 00:15:01.490 ] 00:15:01.490 } 00:15:01.490 }, 00:15:01.490 { 00:15:01.490 "method": "bdev_nvme_set_hotplug", 00:15:01.490 "params": { 00:15:01.490 "period_us": 100000, 00:15:01.490 "enable": false 00:15:01.490 } 00:15:01.490 }, 00:15:01.490 { 00:15:01.490 "method": "bdev_malloc_create", 00:15:01.490 "params": { 00:15:01.490 "name": "malloc0", 00:15:01.490 "num_blocks": 8192, 00:15:01.490 "block_size": 4096, 00:15:01.490 "physical_block_size": 4096, 00:15:01.490 "uuid": "5abd036c-0d18-42b5-814d-1f390a19e4d9", 00:15:01.490 "optimal_io_boundary": 0 00:15:01.490 } 00:15:01.490 }, 00:15:01.490 { 00:15:01.490 "method": "bdev_wait_for_examine" 00:15:01.490 } 00:15:01.490 ] 00:15:01.490 }, 00:15:01.490 { 00:15:01.490 "subsystem": "scsi", 00:15:01.490 "config": null 00:15:01.490 }, 00:15:01.490 { 00:15:01.490 "subsystem": "scheduler", 00:15:01.490 "config": [ 00:15:01.490 { 00:15:01.490 "method": "framework_set_scheduler", 00:15:01.490 "params": { 00:15:01.490 "name": "static" 00:15:01.490 } 00:15:01.490 } 00:15:01.490 ] 00:15:01.490 }, 00:15:01.490 { 00:15:01.490 "subsystem": "vhost_scsi", 00:15:01.490 "config": [] 00:15:01.490 }, 00:15:01.490 { 00:15:01.490 "subsystem": "vhost_blk", 00:15:01.490 "config": [] 00:15:01.490 }, 00:15:01.490 { 00:15:01.490 "subsystem": "ublk", 00:15:01.490 "config": [ 00:15:01.490 { 00:15:01.490 "method": "ublk_create_target", 00:15:01.490 "params": { 00:15:01.490 "cpumask": "1" 00:15:01.490 } 00:15:01.490 }, 00:15:01.490 { 00:15:01.490 "method": "ublk_start_disk", 00:15:01.490 "params": { 00:15:01.490 "bdev_name": "malloc0", 00:15:01.490 "ublk_id": 0, 00:15:01.490 "num_queues": 1, 00:15:01.490 "queue_depth": 128 00:15:01.490 } 00:15:01.490 } 00:15:01.490 ] 00:15:01.490 }, 00:15:01.490 { 00:15:01.490 "subsystem": "nbd", 00:15:01.490 "config": [] 00:15:01.490 }, 00:15:01.490 { 00:15:01.490 "subsystem": "nvmf", 00:15:01.490 "config": [ 00:15:01.490 { 00:15:01.490 "method": "nvmf_set_config", 00:15:01.490 "params": { 00:15:01.490 "discovery_filter": "match_any", 00:15:01.490 "admin_cmd_passthru": { 00:15:01.490 "identify_ctrlr": false 00:15:01.490 } 00:15:01.490 } 00:15:01.490 }, 00:15:01.490 { 00:15:01.490 "method": "nvmf_set_max_subsystems", 00:15:01.490 "params": { 00:15:01.490 "max_subsystems": 1024 00:15:01.490 } 00:15:01.490 }, 00:15:01.490 { 00:15:01.490 "method": "nvmf_set_crdt", 00:15:01.490 "params": { 00:15:01.490 "crdt1": 0, 00:15:01.490 "crdt2": 0, 00:15:01.490 "crdt3": 0 00:15:01.490 } 00:15:01.490 } 00:15:01.490 ] 00:15:01.490 }, 00:15:01.490 { 00:15:01.490 "subsystem": "iscsi", 00:15:01.490 "config": [ 00:15:01.490 { 00:15:01.490 "method": "iscsi_set_options", 00:15:01.490 "params": { 00:15:01.490 "node_base": "iqn.2016-06.io.spdk", 00:15:01.490 "max_sessions": 128, 00:15:01.490 "max_connections_per_session": 2, 00:15:01.490 "max_queue_depth": 64, 00:15:01.490 "default_time2wait": 2, 00:15:01.490 "default_time2retain": 20, 00:15:01.490 "first_burst_length": 8192, 00:15:01.490 "immediate_data": true, 00:15:01.490 "allow_duplicated_isid": false, 00:15:01.490 "error_recovery_level": 0, 00:15:01.490 "nop_timeout": 60, 
00:15:01.490 "nop_in_interval": 30, 00:15:01.490 "disable_chap": false, 00:15:01.490 "require_chap": false, 00:15:01.490 "mutual_chap": false, 00:15:01.490 "chap_group": 0, 00:15:01.490 "max_large_datain_per_connection": 64, 00:15:01.490 "max_r2t_per_connection": 4, 00:15:01.490 "pdu_pool_size": 36864, 00:15:01.490 "immediate_data_pool_size": 16384, 00:15:01.490 "data_out_pool_size": 2048 00:15:01.490 } 00:15:01.490 } 00:15:01.490 ] 00:15:01.490 } 00:15:01.490 ] 00:15:01.490 }' 00:15:01.490 18:21:47 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 87866 00:15:01.490 18:21:47 ublk.test_save_ublk_config -- common/autotest_common.sh@948 -- # '[' -z 87866 ']' 00:15:01.490 18:21:47 ublk.test_save_ublk_config -- common/autotest_common.sh@952 -- # kill -0 87866 00:15:01.490 18:21:47 ublk.test_save_ublk_config -- common/autotest_common.sh@953 -- # uname 00:15:01.490 18:21:47 ublk.test_save_ublk_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:01.490 18:21:47 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 87866 00:15:01.490 killing process with pid 87866 00:15:01.490 18:21:47 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:01.490 18:21:47 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:01.490 18:21:47 ublk.test_save_ublk_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 87866' 00:15:01.490 18:21:47 ublk.test_save_ublk_config -- common/autotest_common.sh@967 -- # kill 87866 00:15:01.490 18:21:47 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # wait 87866 00:15:01.490 [2024-07-11 18:21:47.882550] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:15:01.749 [2024-07-11 18:21:47.918212] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:01.750 [2024-07-11 18:21:47.918399] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:15:01.750 [2024-07-11 18:21:47.927203] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:01.750 [2024-07-11 18:21:47.927266] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:15:01.750 [2024-07-11 18:21:47.927282] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:15:01.750 [2024-07-11 18:21:47.927312] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:15:01.750 [2024-07-11 18:21:47.931278] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:15:01.750 18:21:48 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=87894 00:15:01.750 18:21:48 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 87894 00:15:01.750 18:21:48 ublk.test_save_ublk_config -- common/autotest_common.sh@829 -- # '[' -z 87894 ']' 00:15:01.750 18:21:48 ublk.test_save_ublk_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:01.750 18:21:48 ublk.test_save_ublk_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:01.750 18:21:48 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:15:01.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:01.750 18:21:48 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:01.750 18:21:48 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:01.750 18:21:48 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:15:01.750 "subsystems": [ 00:15:01.750 { 00:15:01.750 "subsystem": "keyring", 00:15:01.750 "config": [] 00:15:01.750 }, 00:15:01.750 { 00:15:01.750 "subsystem": "iobuf", 00:15:01.750 "config": [ 00:15:01.750 { 00:15:01.750 "method": "iobuf_set_options", 00:15:01.750 "params": { 00:15:01.750 "small_pool_count": 8192, 00:15:01.750 "large_pool_count": 1024, 00:15:01.750 "small_bufsize": 8192, 00:15:01.750 "large_bufsize": 135168 00:15:01.750 } 00:15:01.750 } 00:15:01.750 ] 00:15:01.750 }, 00:15:01.750 { 00:15:01.750 "subsystem": "sock", 00:15:01.750 "config": [ 00:15:01.750 { 00:15:01.750 "method": "sock_set_default_impl", 00:15:01.750 "params": { 00:15:01.750 "impl_name": "posix" 00:15:01.750 } 00:15:01.750 }, 00:15:01.750 { 00:15:01.750 "method": "sock_impl_set_options", 00:15:01.750 "params": { 00:15:01.750 "impl_name": "ssl", 00:15:01.750 "recv_buf_size": 4096, 00:15:01.750 "send_buf_size": 4096, 00:15:01.750 "enable_recv_pipe": true, 00:15:01.750 "enable_quickack": false, 00:15:01.750 "enable_placement_id": 0, 00:15:01.750 "enable_zerocopy_send_server": true, 00:15:01.750 "enable_zerocopy_send_client": false, 00:15:01.750 "zerocopy_threshold": 0, 00:15:01.750 "tls_version": 0, 00:15:01.750 "enable_ktls": false 00:15:01.750 } 00:15:01.750 }, 00:15:01.750 { 00:15:01.750 "method": "sock_impl_set_options", 00:15:01.750 "params": { 00:15:01.750 "impl_name": "posix", 00:15:01.750 "recv_buf_size": 2097152, 00:15:01.750 "send_buf_size": 2097152, 00:15:01.750 "enable_recv_pipe": true, 00:15:01.750 "enable_quickack": false, 00:15:01.750 "enable_placement_id": 0, 00:15:01.750 "enable_zerocopy_send_server": true, 00:15:01.750 "enable_zerocopy_send_client": false, 00:15:01.750 "zerocopy_threshold": 0, 00:15:01.750 "tls_version": 0, 00:15:01.750 "enable_ktls": false 00:15:01.750 } 00:15:01.750 } 00:15:01.750 ] 00:15:01.750 }, 00:15:01.750 { 00:15:01.750 "subsystem": "vmd", 00:15:01.750 "config": [] 00:15:01.750 }, 00:15:01.750 { 00:15:01.750 "subsystem": "accel", 00:15:01.750 "config": [ 00:15:01.750 { 00:15:01.750 "method": "accel_set_options", 00:15:01.750 "params": { 00:15:01.750 "small_cache_size": 128, 00:15:01.750 "large_cache_size": 16, 00:15:01.750 "task_count": 2048, 00:15:01.750 "sequence_count": 2048, 00:15:01.750 "buf_count": 2048 00:15:01.750 } 00:15:01.750 } 00:15:01.750 ] 00:15:01.750 }, 00:15:01.750 { 00:15:01.750 "subsystem": "bdev", 00:15:01.750 "config": [ 00:15:01.750 { 00:15:01.750 "method": "bdev_set_options", 00:15:01.750 "params": { 00:15:01.750 "bdev_io_pool_size": 65535, 00:15:01.750 "bdev_io_cache_size": 256, 00:15:01.750 "bdev_auto_examine": true, 00:15:01.750 "iobuf_small_cache_size": 128, 00:15:01.750 "iobuf_large_cache_size": 16 00:15:01.750 } 00:15:01.750 }, 00:15:01.750 { 00:15:01.750 "method": "bdev_raid_set_options", 00:15:01.750 "params": { 00:15:01.750 "process_window_size_kb": 1024 00:15:01.750 } 00:15:01.750 }, 00:15:01.750 { 00:15:01.750 "method": "bdev_iscsi_set_options", 00:15:01.750 "params": { 00:15:01.750 "timeout_sec": 30 00:15:01.750 } 00:15:01.750 }, 00:15:01.750 { 00:15:01.750 "method": "bdev_nvme_set_options", 00:15:01.750 "params": { 00:15:01.750 "action_on_timeout": "none", 00:15:01.750 "timeout_us": 0, 00:15:01.750 "timeout_admin_us": 0, 00:15:01.750 "keep_alive_timeout_ms": 10000, 00:15:01.750 "arbitration_burst": 0, 00:15:01.750 "low_priority_weight": 0, 
00:15:01.750 "medium_priority_weight": 0, 00:15:01.750 "high_priority_weight": 0, 00:15:01.750 "nvme_adminq_poll_period_us": 10000, 00:15:01.750 "nvme_ioq_poll_period_us": 0, 00:15:01.750 "io_queue_requests": 0, 00:15:01.750 "delay_cmd_submit": true, 00:15:01.750 "transport_retry_count": 4, 00:15:01.750 "bdev_retry_count": 3, 00:15:01.750 "transport_ack_timeout": 0, 00:15:01.750 "ctrlr_loss_timeout_sec": 0, 00:15:01.750 "reconnect_delay_sec": 0, 00:15:01.750 "fast_io_fail_timeout_sec": 0, 00:15:01.750 "disable_auto_failback": false, 00:15:01.750 "generate_uuids": false, 00:15:01.750 "transport_tos": 0, 00:15:01.750 "nvme_error_stat": false, 00:15:01.750 "rdma_srq_size": 0, 00:15:01.750 "io_path_stat": false, 00:15:01.750 "allow_accel_sequence": false, 00:15:01.750 "rdma_max_cq_size": 0, 00:15:01.750 "rdma_cm_event_timeout_ms": 0, 00:15:01.750 "dhchap_digests": [ 00:15:01.750 "sha256", 00:15:01.750 "sha384", 00:15:01.750 "sha512" 00:15:01.750 ], 00:15:01.750 "dhchap_dhgroups": [ 00:15:01.750 "null", 00:15:01.750 "ffdhe2048", 00:15:01.750 "ffdhe3072", 00:15:01.750 "ffdhe4096", 00:15:01.750 "ffdhe6144", 00:15:01.750 "ffdhe8192" 00:15:01.750 ] 00:15:01.750 } 00:15:01.750 }, 00:15:01.750 { 00:15:01.750 "method": "bdev_nvme_set_hotplug", 00:15:01.750 "params": { 00:15:01.750 "period_us": 100000, 00:15:01.750 "enable": false 00:15:01.750 } 00:15:01.750 }, 00:15:01.750 { 00:15:01.750 "method": "bdev_malloc_create", 00:15:01.750 "params": { 00:15:01.750 "name": "malloc0", 00:15:01.750 "num_blocks": 8192, 00:15:01.750 "block_size": 4096, 00:15:01.750 "physical_block_size": 4096, 00:15:01.750 "uuid": "5abd036c-0d18-42b5-814d-1f390a19e4d9", 00:15:01.750 "optimal_io_boundary": 0 00:15:01.750 } 00:15:01.750 }, 00:15:01.750 { 00:15:01.750 "method": "bdev_wait_for_examine" 00:15:01.750 } 00:15:01.750 ] 00:15:01.750 }, 00:15:01.750 { 00:15:01.750 "subsystem": "scsi", 00:15:01.750 "config": null 00:15:01.750 }, 00:15:01.750 { 00:15:01.750 "subsystem": "scheduler", 00:15:01.750 "config": [ 00:15:01.750 { 00:15:01.750 "method": "framework_set_scheduler", 00:15:01.750 "params": { 00:15:01.750 "name": "static" 00:15:01.750 } 00:15:01.750 } 00:15:01.750 ] 00:15:01.750 }, 00:15:01.750 { 00:15:01.750 "subsystem": "vhost_scsi", 00:15:01.750 "config": [] 00:15:01.750 }, 00:15:01.750 { 00:15:01.750 "subsystem": "vhost_blk", 00:15:01.750 "config": [] 00:15:01.750 }, 00:15:01.750 { 00:15:01.750 "subsystem": "ublk", 00:15:01.750 "config": [ 00:15:01.750 { 00:15:01.750 "method": "ublk_create_target", 00:15:01.750 "params": { 00:15:01.750 "cpumask": "1" 00:15:01.750 } 00:15:01.750 }, 00:15:01.750 { 00:15:01.750 "method": "ublk_start_disk", 00:15:01.750 "params": { 00:15:01.750 "bdev_name": "malloc0", 00:15:01.750 "ublk_id": 0, 00:15:01.750 "num_queues": 1, 00:15:01.750 "queue_depth": 128 00:15:01.750 } 00:15:01.750 } 00:15:01.750 ] 00:15:01.750 }, 00:15:01.750 { 00:15:01.750 "subsystem": "nbd", 00:15:01.750 "config": [] 00:15:01.750 }, 00:15:01.750 { 00:15:01.750 "subsystem": "nvmf", 00:15:01.750 "config": [ 00:15:01.750 { 00:15:01.750 "method": "nvmf_set_config", 00:15:01.750 "params": { 00:15:01.750 "discovery_filter": "match_any", 00:15:01.750 "admin_cmd_passthru": { 00:15:01.750 "identify_ctrlr": false 00:15:01.750 } 00:15:01.750 } 00:15:01.750 }, 00:15:01.750 { 00:15:01.750 "method": "nvmf_set_max_subsystems", 00:15:01.750 "params": { 00:15:01.750 "max_subsystems": 1024 00:15:01.750 } 00:15:01.750 }, 00:15:01.750 { 00:15:01.750 "method": "nvmf_set_crdt", 00:15:01.750 "params": { 00:15:01.750 "crdt1": 0, 00:15:01.750 
"crdt2": 0, 00:15:01.750 "crdt3": 0 00:15:01.750 } 00:15:01.750 } 00:15:01.750 ] 00:15:01.750 }, 00:15:01.750 { 00:15:01.750 "subsystem": "iscsi", 00:15:01.750 "config": [ 00:15:01.750 { 00:15:01.750 "method": "iscsi_set_options", 00:15:01.750 "params": { 00:15:01.750 "node_base": "iqn.2016-06.io.spdk", 00:15:01.750 "max_sessions": 128, 00:15:01.750 "max_connections_per_session": 2, 00:15:01.750 "max_queue_depth": 64, 00:15:01.750 "default_time2wait": 2, 00:15:01.750 "default_time2retain": 20, 00:15:01.750 "fir 18:21:48 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:01.750 st_burst_length": 8192, 00:15:01.750 "immediate_data": true, 00:15:01.750 "allow_duplicated_isid": false, 00:15:01.750 "error_recovery_level": 0, 00:15:01.751 "nop_timeout": 60, 00:15:01.751 "nop_in_interval": 30, 00:15:01.751 "disable_chap": false, 00:15:01.751 "require_chap": false, 00:15:01.751 "mutual_chap": false, 00:15:01.751 "chap_group": 0, 00:15:01.751 "max_large_datain_per_connection": 64, 00:15:01.751 "max_r2t_per_connection": 4, 00:15:01.751 "pdu_pool_size": 36864, 00:15:01.751 "immediate_data_pool_size": 16384, 00:15:01.751 "data_out_pool_size": 2048 00:15:01.751 } 00:15:01.751 } 00:15:01.751 ] 00:15:01.751 } 00:15:01.751 ] 00:15:01.751 }' 00:15:02.011 [2024-07-11 18:21:48.244168] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:15:02.011 [2024-07-11 18:21:48.244312] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87894 ] 00:15:02.011 [2024-07-11 18:21:48.388004] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:02.270 [2024-07-11 18:21:48.434960] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:02.527 [2024-07-11 18:21:48.718161] ublk.c: 537:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:15:02.527 [2024-07-11 18:21:48.718495] ublk.c: 742:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:02.527 [2024-07-11 18:21:48.726322] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:15:02.527 [2024-07-11 18:21:48.726408] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:15:02.527 [2024-07-11 18:21:48.726426] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:15:02.527 [2024-07-11 18:21:48.726435] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:15:02.527 [2024-07-11 18:21:48.735288] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:02.528 [2024-07-11 18:21:48.735317] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:02.528 [2024-07-11 18:21:48.742118] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:02.528 [2024-07-11 18:21:48.742237] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:15:02.528 [2024-07-11 18:21:48.758198] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:15:02.785 18:21:49 ublk.test_save_ublk_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:02.785 18:21:49 ublk.test_save_ublk_config -- common/autotest_common.sh@862 -- # return 0 00:15:02.785 18:21:49 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 
00:15:02.785 18:21:49 ublk.test_save_ublk_config -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:02.785 18:21:49 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:15:02.785 18:21:49 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:02.785 18:21:49 ublk.test_save_ublk_config -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:03.044 18:21:49 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:15:03.044 18:21:49 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:15:03.044 18:21:49 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 87894 00:15:03.044 18:21:49 ublk.test_save_ublk_config -- common/autotest_common.sh@948 -- # '[' -z 87894 ']' 00:15:03.044 18:21:49 ublk.test_save_ublk_config -- common/autotest_common.sh@952 -- # kill -0 87894 00:15:03.044 18:21:49 ublk.test_save_ublk_config -- common/autotest_common.sh@953 -- # uname 00:15:03.044 18:21:49 ublk.test_save_ublk_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:03.044 18:21:49 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 87894 00:15:03.044 killing process with pid 87894 00:15:03.044 18:21:49 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:03.044 18:21:49 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:03.044 18:21:49 ublk.test_save_ublk_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 87894' 00:15:03.044 18:21:49 ublk.test_save_ublk_config -- common/autotest_common.sh@967 -- # kill 87894 00:15:03.044 18:21:49 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # wait 87894 00:15:03.044 [2024-07-11 18:21:49.428230] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:15:03.302 [2024-07-11 18:21:49.464280] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:03.302 [2024-07-11 18:21:49.468147] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:15:03.302 [2024-07-11 18:21:49.477177] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:03.302 [2024-07-11 18:21:49.477256] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:15:03.302 [2024-07-11 18:21:49.477272] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:15:03.302 [2024-07-11 18:21:49.477303] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:15:03.302 [2024-07-11 18:21:49.477489] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:15:03.302 18:21:49 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:15:03.302 ************************************ 00:15:03.302 END TEST test_save_ublk_config 00:15:03.302 ************************************ 00:15:03.302 00:15:03.302 real 0m3.350s 00:15:03.302 user 0m2.893s 00:15:03.302 sys 0m1.340s 00:15:03.302 18:21:49 ublk.test_save_ublk_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:03.302 18:21:49 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:03.560 18:21:49 ublk -- common/autotest_common.sh@1142 -- # return 0 00:15:03.560 18:21:49 ublk -- ublk/ublk.sh@139 -- # spdk_pid=87950 00:15:03.560 18:21:49 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:15:03.560 18:21:49 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT 
SIGTERM EXIT 00:15:03.560 18:21:49 ublk -- ublk/ublk.sh@141 -- # waitforlisten 87950 00:15:03.560 18:21:49 ublk -- common/autotest_common.sh@829 -- # '[' -z 87950 ']' 00:15:03.560 18:21:49 ublk -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:03.560 18:21:49 ublk -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:03.560 18:21:49 ublk -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:03.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:03.560 18:21:49 ublk -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:03.560 18:21:49 ublk -- common/autotest_common.sh@10 -- # set +x 00:15:03.560 [2024-07-11 18:21:49.834774] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:15:03.560 [2024-07-11 18:21:49.835238] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87950 ] 00:15:03.818 [2024-07-11 18:21:49.983367] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:03.818 [2024-07-11 18:21:50.020295] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:03.818 [2024-07-11 18:21:50.020344] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:04.385 18:21:50 ublk -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:04.385 18:21:50 ublk -- common/autotest_common.sh@862 -- # return 0 00:15:04.385 18:21:50 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:15:04.385 18:21:50 ublk -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:04.385 18:21:50 ublk -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:04.385 18:21:50 ublk -- common/autotest_common.sh@10 -- # set +x 00:15:04.385 ************************************ 00:15:04.385 START TEST test_create_ublk 00:15:04.385 ************************************ 00:15:04.385 18:21:50 ublk.test_create_ublk -- common/autotest_common.sh@1123 -- # test_create_ublk 00:15:04.385 18:21:50 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:15:04.385 18:21:50 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.385 18:21:50 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:04.385 [2024-07-11 18:21:50.727172] ublk.c: 537:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:15:04.385 [2024-07-11 18:21:50.728380] ublk.c: 742:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:04.385 18:21:50 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.385 18:21:50 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:15:04.385 18:21:50 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:15:04.385 18:21:50 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.385 18:21:50 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:04.385 18:21:50 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.385 18:21:50 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:15:04.385 18:21:50 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:15:04.385 18:21:50 ublk.test_create_ublk -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.385 18:21:50 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:04.385 [2024-07-11 18:21:50.781300] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:15:04.385 [2024-07-11 18:21:50.781828] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:15:04.385 [2024-07-11 18:21:50.781858] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:15:04.385 [2024-07-11 18:21:50.781870] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:15:04.385 [2024-07-11 18:21:50.789433] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:04.385 [2024-07-11 18:21:50.789458] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:04.385 [2024-07-11 18:21:50.796181] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:04.643 [2024-07-11 18:21:50.815213] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:15:04.643 [2024-07-11 18:21:50.830154] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:15:04.643 18:21:50 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.643 18:21:50 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:15:04.643 18:21:50 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:15:04.643 18:21:50 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:15:04.643 18:21:50 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.643 18:21:50 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:04.643 18:21:50 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.643 18:21:50 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:15:04.643 { 00:15:04.643 "ublk_device": "/dev/ublkb0", 00:15:04.643 "id": 0, 00:15:04.643 "queue_depth": 512, 00:15:04.643 "num_queues": 4, 00:15:04.643 "bdev_name": "Malloc0" 00:15:04.643 } 00:15:04.643 ]' 00:15:04.643 18:21:50 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:15:04.643 18:21:50 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:15:04.643 18:21:50 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:15:04.643 18:21:50 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:15:04.643 18:21:50 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:15:04.643 18:21:51 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:15:04.643 18:21:51 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:15:04.902 18:21:51 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:15:04.902 18:21:51 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:15:04.902 18:21:51 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:15:04.902 18:21:51 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:15:04.902 18:21:51 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:15:04.902 18:21:51 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:15:04.902 18:21:51 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:15:04.902 18:21:51 ublk.test_create_ublk -- 
lvol/common.sh@43 -- # local rw=write 00:15:04.902 18:21:51 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:15:04.902 18:21:51 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:15:04.902 18:21:51 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:15:04.902 18:21:51 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:15:04.902 18:21:51 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:15:04.902 18:21:51 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:15:04.902 18:21:51 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:15:04.902 fio: verification read phase will never start because write phase uses all of runtime 00:15:04.902 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:15:04.902 fio-3.35 00:15:04.902 Starting 1 process 00:15:17.098 00:15:17.098 fio_test: (groupid=0, jobs=1): err= 0: pid=87989: Thu Jul 11 18:22:01 2024 00:15:17.098 write: IOPS=11.5k, BW=44.9MiB/s (47.1MB/s)(449MiB/10001msec); 0 zone resets 00:15:17.098 clat (usec): min=51, max=3966, avg=85.71, stdev=129.06 00:15:17.098 lat (usec): min=52, max=3966, avg=86.38, stdev=129.07 00:15:17.098 clat percentiles (usec): 00:15:17.098 | 1.00th=[ 59], 5.00th=[ 69], 10.00th=[ 70], 20.00th=[ 71], 00:15:17.098 | 30.00th=[ 72], 40.00th=[ 74], 50.00th=[ 75], 60.00th=[ 76], 00:15:17.098 | 70.00th=[ 80], 80.00th=[ 88], 90.00th=[ 99], 95.00th=[ 108], 00:15:17.098 | 99.00th=[ 125], 99.50th=[ 137], 99.90th=[ 2769], 99.95th=[ 3163], 00:15:17.098 | 99.99th=[ 3720] 00:15:17.098 bw ( KiB/s): min=44632, max=49712, per=100.00%, avg=46018.95, stdev=1073.83, samples=19 00:15:17.098 iops : min=11158, max=12428, avg=11504.74, stdev=268.46, samples=19 00:15:17.098 lat (usec) : 100=90.55%, 250=9.10%, 500=0.01%, 750=0.03%, 1000=0.02% 00:15:17.098 lat (msec) : 2=0.11%, 4=0.17% 00:15:17.098 cpu : usr=3.56%, sys=7.76%, ctx=114926, majf=0, minf=797 00:15:17.098 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:17.098 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:17.098 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:17.098 issued rwts: total=0,114920,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:17.098 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:17.098 00:15:17.098 Run status group 0 (all jobs): 00:15:17.098 WRITE: bw=44.9MiB/s (47.1MB/s), 44.9MiB/s-44.9MiB/s (47.1MB/s-47.1MB/s), io=449MiB (471MB), run=10001-10001msec 00:15:17.098 00:15:17.098 Disk stats (read/write): 00:15:17.098 ublkb0: ios=0/113742, merge=0/0, ticks=0/8870, in_queue=8871, util=99.05% 00:15:17.098 18:22:01 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:15:17.098 18:22:01 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:17.098 18:22:01 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:17.098 [2024-07-11 18:22:01.359398] ublk.c: 
434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:15:17.098 [2024-07-11 18:22:01.398728] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:17.098 [2024-07-11 18:22:01.400147] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:15:17.098 [2024-07-11 18:22:01.406147] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:17.098 [2024-07-11 18:22:01.406547] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:15:17.098 [2024-07-11 18:22:01.406610] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:15:17.098 18:22:01 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:17.098 18:22:01 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0 00:15:17.098 18:22:01 ublk.test_create_ublk -- common/autotest_common.sh@648 -- # local es=0 00:15:17.098 18:22:01 ublk.test_create_ublk -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:15:17.098 18:22:01 ublk.test_create_ublk -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:15:17.098 18:22:01 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:17.098 18:22:01 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:15:17.098 18:22:01 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:17.098 18:22:01 ublk.test_create_ublk -- common/autotest_common.sh@651 -- # rpc_cmd ublk_stop_disk 0 00:15:17.098 18:22:01 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:17.098 18:22:01 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:17.098 [2024-07-11 18:22:01.414271] ublk.c:1071:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:15:17.098 request: 00:15:17.098 { 00:15:17.098 "ublk_id": 0, 00:15:17.098 "method": "ublk_stop_disk", 00:15:17.098 "req_id": 1 00:15:17.098 } 00:15:17.098 Got JSON-RPC error response 00:15:17.098 response: 00:15:17.098 { 00:15:17.098 "code": -19, 00:15:17.098 "message": "No such device" 00:15:17.098 } 00:15:17.098 18:22:01 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:15:17.098 18:22:01 ublk.test_create_ublk -- common/autotest_common.sh@651 -- # es=1 00:15:17.098 18:22:01 ublk.test_create_ublk -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:17.098 18:22:01 ublk.test_create_ublk -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:17.098 18:22:01 ublk.test_create_ublk -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:17.098 18:22:01 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:15:17.098 18:22:01 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:17.098 18:22:01 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:17.098 [2024-07-11 18:22:01.429369] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:15:17.098 [2024-07-11 18:22:01.431354] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:15:17.098 [2024-07-11 18:22:01.431422] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:15:17.098 18:22:01 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:17.098 18:22:01 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:15:17.098 18:22:01 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:17.098 18:22:01 
ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:17.098 18:22:01 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:17.098 18:22:01 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:15:17.098 18:22:01 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:15:17.098 18:22:01 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:17.098 18:22:01 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:17.098 18:22:01 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:17.098 18:22:01 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:15:17.098 18:22:01 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:15:17.098 18:22:01 ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:15:17.098 18:22:01 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:15:17.098 18:22:01 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:17.098 18:22:01 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:17.098 18:22:01 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:17.098 18:22:01 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:15:17.098 18:22:01 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:15:17.098 18:22:01 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:15:17.098 00:15:17.098 real 0m10.868s 00:15:17.098 user 0m0.812s 00:15:17.098 sys 0m0.865s 00:15:17.098 18:22:01 ublk.test_create_ublk -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:17.098 ************************************ 00:15:17.098 END TEST test_create_ublk 00:15:17.098 18:22:01 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:17.098 ************************************ 00:15:17.098 18:22:01 ublk -- common/autotest_common.sh@1142 -- # return 0 00:15:17.098 18:22:01 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:15:17.098 18:22:01 ublk -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:17.098 18:22:01 ublk -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:17.098 18:22:01 ublk -- common/autotest_common.sh@10 -- # set +x 00:15:17.098 ************************************ 00:15:17.098 START TEST test_create_multi_ublk 00:15:17.098 ************************************ 00:15:17.098 18:22:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@1123 -- # test_create_multi_ublk 00:15:17.098 18:22:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:15:17.098 18:22:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:17.098 18:22:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:17.098 [2024-07-11 18:22:01.644162] ublk.c: 537:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:15:17.098 [2024-07-11 18:22:01.645155] ublk.c: 742:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:17.098 18:22:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:17.098 18:22:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:15:17.098 18:22:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:15:17.098 18:22:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:17.098 18:22:01 
ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:15:17.098 18:22:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:17.098 18:22:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:17.098 18:22:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:17.098 18:22:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:15:17.098 18:22:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:15:17.098 18:22:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:17.098 18:22:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:17.098 [2024-07-11 18:22:01.700370] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:15:17.098 [2024-07-11 18:22:01.700942] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:15:17.098 [2024-07-11 18:22:01.700964] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:15:17.098 [2024-07-11 18:22:01.700977] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:15:17.098 [2024-07-11 18:22:01.704475] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:17.098 [2024-07-11 18:22:01.704521] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:17.098 [2024-07-11 18:22:01.715164] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:17.098 [2024-07-11 18:22:01.715887] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:15:17.099 [2024-07-11 18:22:01.727231] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:15:17.099 18:22:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:17.099 18:22:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:15:17.099 18:22:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:17.099 18:22:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:15:17.099 18:22:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:17.099 18:22:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:17.099 18:22:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:17.099 18:22:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:15:17.099 18:22:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:15:17.099 18:22:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:17.099 18:22:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:17.099 [2024-07-11 18:22:01.790340] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:15:17.099 [2024-07-11 18:22:01.790812] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:15:17.099 [2024-07-11 18:22:01.790837] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:15:17.099 [2024-07-11 18:22:01.790848] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:15:17.099 [2024-07-11 
18:22:01.798351] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:17.099 [2024-07-11 18:22:01.798377] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:17.099 [2024-07-11 18:22:01.808169] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:17.099 [2024-07-11 18:22:01.808922] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:15:17.099 [2024-07-11 18:22:01.820209] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:15:17.099 18:22:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:17.099 18:22:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:15:17.099 18:22:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:17.099 18:22:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:15:17.099 18:22:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:17.099 18:22:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:17.099 18:22:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:17.099 18:22:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:15:17.099 18:22:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:15:17.099 18:22:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:17.099 18:22:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:17.099 [2024-07-11 18:22:01.884332] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:15:17.099 [2024-07-11 18:22:01.884881] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:15:17.099 [2024-07-11 18:22:01.884905] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:15:17.099 [2024-07-11 18:22:01.884917] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:15:17.099 [2024-07-11 18:22:01.892133] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:17.099 [2024-07-11 18:22:01.892165] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:17.099 [2024-07-11 18:22:01.906164] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:17.099 [2024-07-11 18:22:01.906973] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:15:17.099 [2024-07-11 18:22:01.915167] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:15:17.099 18:22:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:17.099 18:22:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:15:17.099 18:22:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:17.099 18:22:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:15:17.099 18:22:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:17.099 18:22:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:17.099 18:22:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
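Each device above is created by the same pair of RPCs, and the pattern repeats once more for Malloc3 just below. Condensed, with the parameters used throughout this run (128 MiB malloc bdevs with 4096-byte blocks, exposed through 4 queues of depth 512):

  $ ./scripts/rpc.py bdev_malloc_create -b Malloc2 128 4096   # backing bdev: 128 MiB, 4 KiB blocks
  $ ./scripts/rpc.py ublk_start_disk Malloc2 2 -q 4 -d 512    # serve it to the kernel as /dev/ublkb2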
00:15:17.099 18:22:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:15:17.099 18:22:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:15:17.099 18:22:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:17.099 18:22:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:17.099 [2024-07-11 18:22:01.971320] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:15:17.099 [2024-07-11 18:22:01.971829] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:15:17.099 [2024-07-11 18:22:01.971855] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:15:17.099 [2024-07-11 18:22:01.971866] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:15:17.099 [2024-07-11 18:22:01.982173] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:17.099 [2024-07-11 18:22:01.982220] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:17.099 [2024-07-11 18:22:01.989183] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:17.099 [2024-07-11 18:22:01.989987] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:15:17.099 [2024-07-11 18:22:02.005209] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:15:17.099 18:22:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:17.099 18:22:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:15:17.099 18:22:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:15:17.099 18:22:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:17.099 18:22:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:17.099 18:22:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:17.099 18:22:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:15:17.099 { 00:15:17.099 "ublk_device": "/dev/ublkb0", 00:15:17.099 "id": 0, 00:15:17.099 "queue_depth": 512, 00:15:17.099 "num_queues": 4, 00:15:17.099 "bdev_name": "Malloc0" 00:15:17.099 }, 00:15:17.099 { 00:15:17.099 "ublk_device": "/dev/ublkb1", 00:15:17.099 "id": 1, 00:15:17.099 "queue_depth": 512, 00:15:17.099 "num_queues": 4, 00:15:17.099 "bdev_name": "Malloc1" 00:15:17.099 }, 00:15:17.099 { 00:15:17.099 "ublk_device": "/dev/ublkb2", 00:15:17.099 "id": 2, 00:15:17.099 "queue_depth": 512, 00:15:17.099 "num_queues": 4, 00:15:17.099 "bdev_name": "Malloc2" 00:15:17.099 }, 00:15:17.099 { 00:15:17.099 "ublk_device": "/dev/ublkb3", 00:15:17.099 "id": 3, 00:15:17.099 "queue_depth": 512, 00:15:17.099 "num_queues": 4, 00:15:17.099 "bdev_name": "Malloc3" 00:15:17.099 } 00:15:17.099 ]' 00:15:17.099 18:22:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:15:17.099 18:22:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:17.099 18:22:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:15:17.099 18:22:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:15:17.099 18:22:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:15:17.099 18:22:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # 
[[ 0 = \0 ]] 00:15:17.099 18:22:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:15:17.099 18:22:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:15:17.099 18:22:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:15:17.099 18:22:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:15:17.099 18:22:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:15:17.099 18:22:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:15:17.099 18:22:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:17.099 18:22:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:15:17.099 18:22:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 00:15:17.099 18:22:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:15:17.099 18:22:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:15:17.099 18:22:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:15:17.099 18:22:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:15:17.099 18:22:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:15:17.099 18:22:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:15:17.099 18:22:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:15:17.099 18:22:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:15:17.099 18:22:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:17.099 18:22:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:15:17.099 18:22:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:15:17.099 18:22:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:15:17.099 18:22:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:15:17.099 18:22:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:15:17.099 18:22:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:15:17.099 18:22:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:15:17.099 18:22:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:15:17.099 18:22:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:15:17.099 18:22:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:15:17.099 18:22:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:17.099 18:22:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:15:17.099 18:22:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:15:17.099 18:22:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:15:17.099 18:22:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:15:17.099 18:22:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:15:17.099 18:22:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:15:17.099 18:22:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:15:17.099 18:22:03 
ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:15:17.099 18:22:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:15:17.099 18:22:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:15:17.100 18:22:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:15:17.100 18:22:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:15:17.100 18:22:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:17.100 18:22:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:15:17.100 18:22:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:17.100 18:22:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:17.100 [2024-07-11 18:22:03.066334] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:15:17.100 [2024-07-11 18:22:03.101539] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:17.100 [2024-07-11 18:22:03.103215] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:15:17.100 [2024-07-11 18:22:03.108168] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:17.100 [2024-07-11 18:22:03.108563] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:15:17.100 [2024-07-11 18:22:03.108591] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:15:17.100 18:22:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:17.100 18:22:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:17.100 18:22:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:15:17.100 18:22:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:17.100 18:22:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:17.100 [2024-07-11 18:22:03.116364] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:15:17.100 [2024-07-11 18:22:03.152524] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:17.100 [2024-07-11 18:22:03.155513] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:15:17.100 [2024-07-11 18:22:03.161126] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:17.100 [2024-07-11 18:22:03.161497] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:15:17.100 [2024-07-11 18:22:03.161523] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:15:17.100 18:22:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:17.100 18:22:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:17.100 18:22:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:15:17.100 18:22:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:17.100 18:22:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:17.100 [2024-07-11 18:22:03.165372] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:15:17.100 [2024-07-11 18:22:03.194673] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:17.100 [2024-07-11 18:22:03.200113] ublk.c: 
434:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:15:17.100 [2024-07-11 18:22:03.207283] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:17.100 [2024-07-11 18:22:03.207658] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:15:17.100 [2024-07-11 18:22:03.207683] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:15:17.100 18:22:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:17.100 18:22:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:17.100 18:22:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:15:17.100 18:22:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:17.100 18:22:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:17.100 [2024-07-11 18:22:03.223292] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:15:17.100 [2024-07-11 18:22:03.259660] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:17.100 [2024-07-11 18:22:03.264511] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:15:17.100 [2024-07-11 18:22:03.271179] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:17.100 [2024-07-11 18:22:03.271573] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:15:17.100 [2024-07-11 18:22:03.271599] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:15:17.100 18:22:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:17.100 18:22:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:15:17.358 [2024-07-11 18:22:03.531300] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:15:17.358 [2024-07-11 18:22:03.532695] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:15:17.358 [2024-07-11 18:22:03.532751] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:15:17.358 18:22:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:15:17.358 18:22:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:17.358 18:22:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:15:17.358 18:22:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:17.358 18:22:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:17.358 18:22:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:17.358 18:22:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:17.358 18:22:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:15:17.358 18:22:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:17.358 18:22:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:17.358 18:22:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:17.358 18:22:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:17.358 18:22:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:15:17.358 18:22:03 ublk.test_create_multi_ublk -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:15:17.358 18:22:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:17.358 18:22:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:17.358 18:22:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:17.358 18:22:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:15:17.358 18:22:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:17.358 18:22:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:17.358 18:22:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:17.358 18:22:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:15:17.358 18:22:03 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:15:17.358 18:22:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:17.358 18:22:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:17.358 18:22:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:17.358 18:22:03 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:15:17.358 18:22:03 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:15:17.358 18:22:03 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:15:17.616 18:22:03 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:15:17.616 18:22:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:17.616 18:22:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:17.616 18:22:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:17.616 18:22:03 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:15:17.616 18:22:03 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:15:17.616 ************************************ 00:15:17.616 END TEST test_create_multi_ublk 00:15:17.616 ************************************ 00:15:17.616 18:22:03 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:15:17.616 00:15:17.616 real 0m2.193s 00:15:17.616 user 0m1.293s 00:15:17.616 sys 0m0.172s 00:15:17.616 18:22:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:17.616 18:22:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:17.616 18:22:03 ublk -- common/autotest_common.sh@1142 -- # return 0 00:15:17.616 18:22:03 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:15:17.616 18:22:03 ublk -- ublk/ublk.sh@147 -- # cleanup 00:15:17.616 18:22:03 ublk -- ublk/ublk.sh@130 -- # killprocess 87950 00:15:17.616 18:22:03 ublk -- common/autotest_common.sh@948 -- # '[' -z 87950 ']' 00:15:17.616 18:22:03 ublk -- common/autotest_common.sh@952 -- # kill -0 87950 00:15:17.616 18:22:03 ublk -- common/autotest_common.sh@953 -- # uname 00:15:17.616 18:22:03 ublk -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:17.616 18:22:03 ublk -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 87950 00:15:17.616 killing process with pid 87950 00:15:17.616 18:22:03 ublk -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:17.616 18:22:03 ublk -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo 
']' 00:15:17.616 18:22:03 ublk -- common/autotest_common.sh@966 -- # echo 'killing process with pid 87950' 00:15:17.616 18:22:03 ublk -- common/autotest_common.sh@967 -- # kill 87950 00:15:17.616 18:22:03 ublk -- common/autotest_common.sh@972 -- # wait 87950 00:15:17.616 [2024-07-11 18:22:03.992462] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:15:17.616 [2024-07-11 18:22:03.992526] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:15:17.874 00:15:17.874 real 0m17.966s 00:15:17.874 user 0m28.691s 00:15:17.874 sys 0m7.646s 00:15:17.874 18:22:04 ublk -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:17.874 18:22:04 ublk -- common/autotest_common.sh@10 -- # set +x 00:15:17.874 ************************************ 00:15:17.874 END TEST ublk 00:15:17.874 ************************************ 00:15:17.874 18:22:04 -- common/autotest_common.sh@1142 -- # return 0 00:15:17.874 18:22:04 -- spdk/autotest.sh@252 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:15:17.874 18:22:04 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:17.874 18:22:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:17.874 18:22:04 -- common/autotest_common.sh@10 -- # set +x 00:15:17.874 ************************************ 00:15:17.874 START TEST ublk_recovery 00:15:17.874 ************************************ 00:15:17.874 18:22:04 ublk_recovery -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:15:18.131 * Looking for test storage... 00:15:18.131 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:15:18.131 18:22:04 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:15:18.131 18:22:04 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:15:18.131 18:22:04 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:15:18.131 18:22:04 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:15:18.131 18:22:04 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:15:18.131 18:22:04 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:15:18.131 18:22:04 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:15:18.131 18:22:04 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:15:18.131 18:22:04 ublk_recovery -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:15:18.131 18:22:04 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:15:18.131 18:22:04 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=88284 00:15:18.131 18:22:04 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:15:18.131 18:22:04 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:18.131 18:22:04 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 88284 00:15:18.131 18:22:04 ublk_recovery -- common/autotest_common.sh@829 -- # '[' -z 88284 ']' 00:15:18.131 18:22:04 ublk_recovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:18.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:18.131 18:22:04 ublk_recovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:18.131 18:22:04 ublk_recovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
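The launch pattern recorded above is the standard one for these suites: load the kernel driver, start spdk_tgt with the ublk debug log component enabled, then poll the RPC socket until the target answers. A minimal sketch of that sequence, using the paths shown in this run (the polling loop is illustrative; the suite's waitforlisten helper does the equivalent):

    modprobe ublk_drv
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk &
    spdk_pid=$!
    # Poll the UNIX-domain RPC socket until the target is ready to serve RPCs.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done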
00:15:18.131 18:22:04 ublk_recovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:18.131 18:22:04 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:18.131 [2024-07-11 18:22:04.432636] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:15:18.131 [2024-07-11 18:22:04.432827] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88284 ] 00:15:18.389 [2024-07-11 18:22:04.583096] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:18.389 [2024-07-11 18:22:04.622582] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:18.389 [2024-07-11 18:22:04.622609] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:18.954 18:22:05 ublk_recovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:18.954 18:22:05 ublk_recovery -- common/autotest_common.sh@862 -- # return 0 00:15:18.954 18:22:05 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:15:18.954 18:22:05 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:18.954 18:22:05 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:19.211 [2024-07-11 18:22:05.376204] ublk.c: 537:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:15:19.211 [2024-07-11 18:22:05.377382] ublk.c: 742:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:19.211 18:22:05 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:19.211 18:22:05 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:15:19.211 18:22:05 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:19.212 18:22:05 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:19.212 malloc0 00:15:19.212 18:22:05 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:19.212 18:22:05 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:15:19.212 18:22:05 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:19.212 18:22:05 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:19.212 [2024-07-11 18:22:05.407577] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 2 queue_depth 128 00:15:19.212 [2024-07-11 18:22:05.407714] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:15:19.212 [2024-07-11 18:22:05.407732] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:15:19.212 [2024-07-11 18:22:05.407742] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:15:19.212 [2024-07-11 18:22:05.415309] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:19.212 [2024-07-11 18:22:05.415337] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:19.212 [2024-07-11 18:22:05.422123] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:19.212 [2024-07-11 18:22:05.422302] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:15:19.212 [2024-07-11 18:22:05.437177] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:15:19.212 1 00:15:19.212 18:22:05 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 
0 ]] 00:15:19.212 18:22:05 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:15:20.143 18:22:06 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=88317 00:15:20.143 18:22:06 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:15:20.143 18:22:06 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:15:20.143 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:20.143 fio-3.35 00:15:20.143 Starting 1 process 00:15:25.403 18:22:11 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 88284 00:15:25.403 18:22:11 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:15:30.660 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 88284 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:15:30.660 18:22:16 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=88428 00:15:30.660 18:22:16 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:15:30.660 18:22:16 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:30.660 18:22:16 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 88428 00:15:30.660 18:22:16 ublk_recovery -- common/autotest_common.sh@829 -- # '[' -z 88428 ']' 00:15:30.660 18:22:16 ublk_recovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:30.660 18:22:16 ublk_recovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:30.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:30.660 18:22:16 ublk_recovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:30.660 18:22:16 ublk_recovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:30.660 18:22:16 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:30.660 [2024-07-11 18:22:16.566945] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
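To recap the scenario driving this trace: a ublk disk was created and put under fio load, the first target (pid 88284) was killed with SIGKILL mid-I/O, and a second target (pid 88428) is now booting to take the device back over. Reassembled from the commands visible above (queue count and depth are the values used in this run):

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc_py ublk_create_target
    $rpc_py bdev_malloc_create -b malloc0 64 4096
    $rpc_py ublk_start_disk malloc0 1 -q 2 -d 128    # exposes /dev/ublkb1
    taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 \
        --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 &
    kill -9 "$spdk_pid"    # simulate a target crash; the kernel side of /dev/ublkb1 stays alive
    # A fresh spdk_tgt is then started and ublk_recover_disk reattaches the device, as the
    # UBLK_CMD_GET_DEV_INFO / START_USER_RECOVERY / END_USER_RECOVERY sequence below shows.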
00:15:30.660 [2024-07-11 18:22:16.567221] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88428 ] 00:15:30.660 [2024-07-11 18:22:16.719628] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:30.660 [2024-07-11 18:22:16.765252] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:30.660 [2024-07-11 18:22:16.765307] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:31.226 18:22:17 ublk_recovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:31.226 18:22:17 ublk_recovery -- common/autotest_common.sh@862 -- # return 0 00:15:31.226 18:22:17 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:15:31.226 18:22:17 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:31.226 18:22:17 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:31.226 [2024-07-11 18:22:17.518183] ublk.c: 537:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:15:31.226 [2024-07-11 18:22:17.519433] ublk.c: 742:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:31.226 18:22:17 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:31.226 18:22:17 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:15:31.226 18:22:17 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:31.226 18:22:17 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:31.226 malloc0 00:15:31.226 18:22:17 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:31.226 18:22:17 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:15:31.226 18:22:17 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:31.226 18:22:17 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:31.226 [2024-07-11 18:22:17.552381] ublk.c:2095:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:15:31.226 [2024-07-11 18:22:17.552478] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:15:31.226 [2024-07-11 18:22:17.552507] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:15:31.226 1 00:15:31.226 18:22:17 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:31.226 18:22:17 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 88317 00:15:31.226 [2024-07-11 18:22:17.563230] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:15:31.226 [2024-07-11 18:22:17.563306] ublk.c:2024:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:15:31.226 [2024-07-11 18:22:17.563432] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:15:57.794 [2024-07-11 18:22:41.609185] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:15:57.794 [2024-07-11 18:22:41.615527] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:15:57.794 [2024-07-11 18:22:41.621379] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:15:57.794 [2024-07-11 18:22:41.621409] ublk.c: 378:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:16:24.331 00:16:24.331 
fio_test: (groupid=0, jobs=1): err= 0: pid=88320: Thu Jul 11 18:23:06 2024 00:16:24.331 read: IOPS=10.0k, BW=39.2MiB/s (41.2MB/s)(2355MiB/60003msec) 00:16:24.331 slat (usec): min=2, max=183, avg= 6.38, stdev= 3.02 00:16:24.331 clat (usec): min=1679, max=30178k, avg=6262.19, stdev=308421.64 00:16:24.331 lat (usec): min=1684, max=30178k, avg=6268.57, stdev=308421.64 00:16:24.331 clat percentiles (msec): 00:16:24.331 | 1.00th=[ 3], 5.00th=[ 3], 10.00th=[ 3], 20.00th=[ 3], 00:16:24.331 | 30.00th=[ 3], 40.00th=[ 3], 50.00th=[ 3], 60.00th=[ 3], 00:16:24.331 | 70.00th=[ 4], 80.00th=[ 4], 90.00th=[ 4], 95.00th=[ 5], 00:16:24.331 | 99.00th=[ 7], 99.50th=[ 8], 99.90th=[ 9], 99.95th=[ 13], 00:16:24.331 | 99.99th=[17113] 00:16:24.331 bw ( KiB/s): min= 4568, max=86648, per=100.00%, avg=79141.98, stdev=13201.77, samples=60 00:16:24.331 iops : min= 1142, max=21662, avg=19785.48, stdev=3300.44, samples=60 00:16:24.331 write: IOPS=10.0k, BW=39.2MiB/s (41.1MB/s)(2352MiB/60003msec); 0 zone resets 00:16:24.331 slat (usec): min=2, max=177, avg= 6.44, stdev= 3.10 00:16:24.331 clat (usec): min=1582, max=30179k, avg=6469.22, stdev=313457.21 00:16:24.331 lat (usec): min=1602, max=30179k, avg=6475.66, stdev=313457.20 00:16:24.331 clat percentiles (msec): 00:16:24.331 | 1.00th=[ 3], 5.00th=[ 3], 10.00th=[ 3], 20.00th=[ 3], 00:16:24.331 | 30.00th=[ 4], 40.00th=[ 4], 50.00th=[ 4], 60.00th=[ 4], 00:16:24.331 | 70.00th=[ 4], 80.00th=[ 4], 90.00th=[ 4], 95.00th=[ 5], 00:16:24.331 | 99.00th=[ 7], 99.50th=[ 8], 99.90th=[ 9], 99.95th=[ 13], 00:16:24.331 | 99.99th=[17113] 00:16:24.331 bw ( KiB/s): min= 4776, max=85088, per=100.00%, avg=79059.03, stdev=13167.72, samples=60 00:16:24.331 iops : min= 1194, max=21272, avg=19764.72, stdev=3291.92, samples=60 00:16:24.331 lat (msec) : 2=0.06%, 4=94.22%, 10=5.63%, 20=0.07%, >=2000=0.01% 00:16:24.331 cpu : usr=5.73%, sys=11.86%, ctx=37077, majf=0, minf=13 00:16:24.331 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:16:24.331 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:24.331 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:24.331 issued rwts: total=602920,602232,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:24.331 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:24.331 00:16:24.331 Run status group 0 (all jobs): 00:16:24.331 READ: bw=39.2MiB/s (41.2MB/s), 39.2MiB/s-39.2MiB/s (41.2MB/s-41.2MB/s), io=2355MiB (2470MB), run=60003-60003msec 00:16:24.331 WRITE: bw=39.2MiB/s (41.1MB/s), 39.2MiB/s-39.2MiB/s (41.1MB/s-41.1MB/s), io=2352MiB (2467MB), run=60003-60003msec 00:16:24.331 00:16:24.331 Disk stats (read/write): 00:16:24.331 ublkb1: ios=600597/599971, merge=0/0, ticks=3712891/3766486, in_queue=7479377, util=99.94% 00:16:24.331 18:23:06 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:16:24.331 18:23:06 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.331 18:23:06 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:24.331 [2024-07-11 18:23:06.705427] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:16:24.331 [2024-07-11 18:23:06.747271] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:24.331 [2024-07-11 18:23:06.747599] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:16:24.331 [2024-07-11 18:23:06.755172] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:24.331 [2024-07-11 
18:23:06.755302] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:16:24.331 [2024-07-11 18:23:06.755316] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:16:24.331 18:23:06 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.331 18:23:06 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:16:24.331 18:23:06 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.331 18:23:06 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:24.331 [2024-07-11 18:23:06.770305] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:16:24.331 [2024-07-11 18:23:06.771677] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:16:24.331 [2024-07-11 18:23:06.771739] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:16:24.332 18:23:06 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.332 18:23:06 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:16:24.332 18:23:06 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:16:24.332 18:23:06 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 88428 00:16:24.332 18:23:06 ublk_recovery -- common/autotest_common.sh@948 -- # '[' -z 88428 ']' 00:16:24.332 18:23:06 ublk_recovery -- common/autotest_common.sh@952 -- # kill -0 88428 00:16:24.332 18:23:06 ublk_recovery -- common/autotest_common.sh@953 -- # uname 00:16:24.332 18:23:06 ublk_recovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:24.332 18:23:06 ublk_recovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 88428 00:16:24.332 18:23:06 ublk_recovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:24.332 18:23:06 ublk_recovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:24.332 killing process with pid 88428 00:16:24.332 18:23:06 ublk_recovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 88428' 00:16:24.332 18:23:06 ublk_recovery -- common/autotest_common.sh@967 -- # kill 88428 00:16:24.332 18:23:06 ublk_recovery -- common/autotest_common.sh@972 -- # wait 88428 00:16:24.332 [2024-07-11 18:23:06.909239] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:16:24.332 [2024-07-11 18:23:06.909333] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:16:24.332 00:16:24.332 real 1m2.890s 00:16:24.332 user 1m47.142s 00:16:24.332 sys 0m18.956s 00:16:24.332 18:23:07 ublk_recovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:24.332 18:23:07 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:24.332 ************************************ 00:16:24.332 END TEST ublk_recovery 00:16:24.332 ************************************ 00:16:24.332 18:23:07 -- common/autotest_common.sh@1142 -- # return 0 00:16:24.332 18:23:07 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:16:24.332 18:23:07 -- spdk/autotest.sh@260 -- # timing_exit lib 00:16:24.332 18:23:07 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:24.332 18:23:07 -- common/autotest_common.sh@10 -- # set +x 00:16:24.332 18:23:07 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:16:24.332 18:23:07 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:16:24.332 18:23:07 -- spdk/autotest.sh@279 -- # '[' 0 -eq 1 ']' 00:16:24.332 18:23:07 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:16:24.332 18:23:07 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:16:24.332 18:23:07 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:16:24.332 18:23:07 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 
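A quick arithmetic check on the fio summary above shows the reported numbers are self-consistent: 602,920 read I/Os over 60.003 s is 602920 / 60.003, roughly 10,048 IOPS, and at the 4 KiB block size that is 10048 x 4096 bytes/s, about 41.2 MB/s (39.2 MiB/s), matching the READ status line. The latency tail also lines up with the injected fault: the maximum read completion latency of about 30,178,000 usec (roughly 30 s) corresponds to I/Os that were in flight when the target was killed at 18:22:11 and only completed after recovery finished at 18:22:41.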
00:16:24.332 18:23:07 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:16:24.332 18:23:07 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:16:24.332 18:23:07 -- spdk/autotest.sh@339 -- # '[' 1 -eq 1 ']' 00:16:24.332 18:23:07 -- spdk/autotest.sh@340 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:16:24.332 18:23:07 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:16:24.332 18:23:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:24.332 18:23:07 -- common/autotest_common.sh@10 -- # set +x 00:16:24.332 ************************************ 00:16:24.332 START TEST ftl 00:16:24.332 ************************************ 00:16:24.332 18:23:07 ftl -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:16:24.332 * Looking for test storage... 00:16:24.332 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:24.332 18:23:07 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:24.332 18:23:07 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:16:24.332 18:23:07 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:24.332 18:23:07 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:24.332 18:23:07 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:16:24.332 18:23:07 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:24.332 18:23:07 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:24.332 18:23:07 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:24.332 18:23:07 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:24.332 18:23:07 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:24.332 18:23:07 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:24.332 18:23:07 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:24.332 18:23:07 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:24.332 18:23:07 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:24.332 18:23:07 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:24.332 18:23:07 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:24.332 18:23:07 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:24.332 18:23:07 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:24.332 18:23:07 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:24.332 18:23:07 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:24.332 18:23:07 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:24.332 18:23:07 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:24.332 18:23:07 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:24.332 18:23:07 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:24.332 18:23:07 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:24.332 18:23:07 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:24.332 18:23:07 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:24.332 18:23:07 ftl -- ftl/common.sh@25 -- # export 
spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:24.332 18:23:07 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:24.332 18:23:07 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:24.332 18:23:07 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:16:24.332 18:23:07 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:16:24.332 18:23:07 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:16:24.332 18:23:07 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:16:24.332 18:23:07 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:24.332 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:24.332 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:24.332 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:24.332 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:24.332 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:24.332 18:23:07 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=89192 00:16:24.332 18:23:07 ftl -- ftl/ftl.sh@38 -- # waitforlisten 89192 00:16:24.332 18:23:07 ftl -- common/autotest_common.sh@829 -- # '[' -z 89192 ']' 00:16:24.332 18:23:07 ftl -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:24.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:24.332 18:23:07 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:16:24.332 18:23:07 ftl -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:24.332 18:23:07 ftl -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:24.332 18:23:07 ftl -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:24.332 18:23:07 ftl -- common/autotest_common.sh@10 -- # set +x 00:16:24.332 [2024-07-11 18:23:07.963869] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
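This target is started with --wait-for-rpc, which defers subsystem initialization until an explicit RPC so that bdev options can be configured first; the next few trace lines show exactly that handshake. Condensed into a sketch (the /dev/fd/62 seen in the trace is the process substitution written out here; -d is understood to disable bdev auto-examine):

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc &
    $rpc_py bdev_set_options -d            # must run before framework init
    $rpc_py framework_start_init           # now bring the subsystems up
    $rpc_py load_subsystem_config -j <(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh)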
00:16:24.332 [2024-07-11 18:23:07.964105] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89192 ] 00:16:24.332 [2024-07-11 18:23:08.116200] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:24.332 [2024-07-11 18:23:08.160387] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:24.332 18:23:08 ftl -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:24.332 18:23:08 ftl -- common/autotest_common.sh@862 -- # return 0 00:16:24.332 18:23:08 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:16:24.332 18:23:09 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:16:24.332 18:23:09 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:16:24.332 18:23:09 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:16:24.332 18:23:10 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:16:24.332 18:23:10 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:16:24.332 18:23:10 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:16:24.332 18:23:10 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:16:24.332 18:23:10 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:16:24.332 18:23:10 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:16:24.332 18:23:10 ftl -- ftl/ftl.sh@50 -- # break 00:16:24.332 18:23:10 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:16:24.332 18:23:10 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:16:24.332 18:23:10 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:16:24.332 18:23:10 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:16:24.332 18:23:10 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:16:24.332 18:23:10 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:16:24.332 18:23:10 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:16:24.332 18:23:10 ftl -- ftl/ftl.sh@63 -- # break 00:16:24.332 18:23:10 ftl -- ftl/ftl.sh@66 -- # killprocess 89192 00:16:24.332 18:23:10 ftl -- common/autotest_common.sh@948 -- # '[' -z 89192 ']' 00:16:24.332 18:23:10 ftl -- common/autotest_common.sh@952 -- # kill -0 89192 00:16:24.332 18:23:10 ftl -- common/autotest_common.sh@953 -- # uname 00:16:24.332 18:23:10 ftl -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:24.332 18:23:10 ftl -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 89192 00:16:24.332 18:23:10 ftl -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:24.332 18:23:10 ftl -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:24.332 killing process with pid 89192 00:16:24.332 18:23:10 ftl -- common/autotest_common.sh@966 -- # echo 'killing process with pid 89192' 00:16:24.332 18:23:10 ftl -- common/autotest_common.sh@967 -- # kill 89192 00:16:24.332 18:23:10 ftl -- common/autotest_common.sh@972 -- # wait 89192 00:16:24.591 18:23:10 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:16:24.591 18:23:10 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic 
/home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:16:24.591 18:23:10 ftl -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:16:24.591 18:23:10 ftl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:24.591 18:23:10 ftl -- common/autotest_common.sh@10 -- # set +x 00:16:24.591 ************************************ 00:16:24.591 START TEST ftl_fio_basic 00:16:24.591 ************************************ 00:16:24.591 18:23:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:16:24.591 * Looking for test storage... 00:16:24.591 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:24.591 18:23:10 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:24.591 18:23:10 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:16:24.591 18:23:11 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:24.850 18:23:11 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:24.850 18:23:11 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:16:24.850 18:23:11 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:24.850 18:23:11 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:24.850 18:23:11 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:24.850 18:23:11 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:24.850 18:23:11 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:24.850 18:23:11 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:24.850 18:23:11 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:24.850 18:23:11 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:24.850 18:23:11 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:24.850 18:23:11 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:24.850 18:23:11 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:24.850 18:23:11 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:24.850 18:23:11 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:24.850 18:23:11 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:24.850 18:23:11 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:24.850 18:23:11 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:24.850 18:23:11 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:24.850 18:23:11 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:24.850 18:23:11 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:24.850 18:23:11 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:24.850 18:23:11 ftl.ftl_fio_basic -- 
ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:24.850 18:23:11 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:24.850 18:23:11 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:24.850 18:23:11 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:24.850 18:23:11 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:16:24.850 18:23:11 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:16:24.850 18:23:11 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:16:24.850 18:23:11 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:16:24.850 18:23:11 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:24.850 18:23:11 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:16:24.850 18:23:11 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:16:24.850 18:23:11 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128' 00:16:24.850 18:23:11 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:16:24.850 18:23:11 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:16:24.850 18:23:11 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:16:24.850 18:23:11 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:16:24.850 18:23:11 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:16:24.850 18:23:11 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:16:24.850 18:23:11 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:24.850 18:23:11 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:24.850 18:23:11 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:16:24.850 18:23:11 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=89311 00:16:24.850 18:23:11 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 89311 00:16:24.850 18:23:11 ftl.ftl_fio_basic -- common/autotest_common.sh@829 -- # '[' -z 89311 ']' 00:16:24.850 18:23:11 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:16:24.850 18:23:11 ftl.ftl_fio_basic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:24.850 18:23:11 ftl.ftl_fio_basic -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:24.850 18:23:11 ftl.ftl_fio_basic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:24.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:24.850 18:23:11 ftl.ftl_fio_basic -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:24.850 18:23:11 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:24.850 [2024-07-11 18:23:11.129352] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
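Worth noting before this target finishes booting: the workload list for ftl_fio_basic was chosen above through an associative array that maps a suite name to a set of fio job names, with the script argument ('basic' in this run) selecting the entry. A condensed sketch of that selection (the loop body is illustrative, not a verbatim copy of fio.sh):

    declare -A suite
    suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128'
    suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap'
    tests=${suite[$3]}    # fio.sh was invoked as: fio.sh 0000:00:11.0 0000:00:10.0 basic
    for t in $tests; do
        # each name is assumed to correspond to a fio job file shipped with the FTL suite
        echo "selected job: $t (FTL_BDEV_NAME=$FTL_BDEV_NAME)"
    done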
00:16:24.850 [2024-07-11 18:23:11.129551] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89311 ] 00:16:25.109 [2024-07-11 18:23:11.275620] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:25.109 [2024-07-11 18:23:11.316570] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:25.109 [2024-07-11 18:23:11.316719] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:25.109 [2024-07-11 18:23:11.316790] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:25.677 18:23:12 ftl.ftl_fio_basic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:25.677 18:23:12 ftl.ftl_fio_basic -- common/autotest_common.sh@862 -- # return 0 00:16:25.677 18:23:12 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:16:25.677 18:23:12 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:16:25.677 18:23:12 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:16:25.677 18:23:12 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:16:25.677 18:23:12 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:16:25.677 18:23:12 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:16:26.246 18:23:12 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:16:26.246 18:23:12 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:16:26.246 18:23:12 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:16:26.246 18:23:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:16:26.246 18:23:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:26.246 18:23:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:16:26.246 18:23:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:16:26.246 18:23:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:16:26.506 18:23:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:26.506 { 00:16:26.506 "name": "nvme0n1", 00:16:26.506 "aliases": [ 00:16:26.506 "21c3d6d2-9e8a-495a-b957-b73b3006be65" 00:16:26.506 ], 00:16:26.506 "product_name": "NVMe disk", 00:16:26.506 "block_size": 4096, 00:16:26.506 "num_blocks": 1310720, 00:16:26.506 "uuid": "21c3d6d2-9e8a-495a-b957-b73b3006be65", 00:16:26.506 "assigned_rate_limits": { 00:16:26.506 "rw_ios_per_sec": 0, 00:16:26.506 "rw_mbytes_per_sec": 0, 00:16:26.506 "r_mbytes_per_sec": 0, 00:16:26.506 "w_mbytes_per_sec": 0 00:16:26.506 }, 00:16:26.506 "claimed": false, 00:16:26.506 "zoned": false, 00:16:26.506 "supported_io_types": { 00:16:26.506 "read": true, 00:16:26.506 "write": true, 00:16:26.506 "unmap": true, 00:16:26.506 "flush": true, 00:16:26.506 "reset": true, 00:16:26.506 "nvme_admin": true, 00:16:26.506 "nvme_io": true, 00:16:26.506 "nvme_io_md": false, 00:16:26.506 "write_zeroes": true, 00:16:26.506 "zcopy": false, 00:16:26.506 "get_zone_info": false, 00:16:26.506 "zone_management": false, 00:16:26.506 "zone_append": false, 00:16:26.506 "compare": true, 00:16:26.506 "compare_and_write": false, 00:16:26.506 "abort": true, 00:16:26.506 "seek_hole": false, 00:16:26.506 
"seek_data": false, 00:16:26.506 "copy": true, 00:16:26.506 "nvme_iov_md": false 00:16:26.506 }, 00:16:26.506 "driver_specific": { 00:16:26.506 "nvme": [ 00:16:26.506 { 00:16:26.506 "pci_address": "0000:00:11.0", 00:16:26.506 "trid": { 00:16:26.506 "trtype": "PCIe", 00:16:26.506 "traddr": "0000:00:11.0" 00:16:26.506 }, 00:16:26.506 "ctrlr_data": { 00:16:26.506 "cntlid": 0, 00:16:26.506 "vendor_id": "0x1b36", 00:16:26.506 "model_number": "QEMU NVMe Ctrl", 00:16:26.506 "serial_number": "12341", 00:16:26.506 "firmware_revision": "8.0.0", 00:16:26.506 "subnqn": "nqn.2019-08.org.qemu:12341", 00:16:26.506 "oacs": { 00:16:26.506 "security": 0, 00:16:26.506 "format": 1, 00:16:26.506 "firmware": 0, 00:16:26.506 "ns_manage": 1 00:16:26.506 }, 00:16:26.506 "multi_ctrlr": false, 00:16:26.506 "ana_reporting": false 00:16:26.506 }, 00:16:26.506 "vs": { 00:16:26.506 "nvme_version": "1.4" 00:16:26.506 }, 00:16:26.506 "ns_data": { 00:16:26.506 "id": 1, 00:16:26.506 "can_share": false 00:16:26.506 } 00:16:26.506 } 00:16:26.506 ], 00:16:26.506 "mp_policy": "active_passive" 00:16:26.506 } 00:16:26.506 } 00:16:26.506 ]' 00:16:26.506 18:23:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:26.506 18:23:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:16:26.506 18:23:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:26.506 18:23:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=1310720 00:16:26.506 18:23:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:16:26.506 18:23:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 5120 00:16:26.506 18:23:12 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:16:26.506 18:23:12 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:16:26.506 18:23:12 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:16:26.506 18:23:12 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:16:26.506 18:23:12 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:16:26.764 18:23:12 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:16:26.764 18:23:12 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:16:27.022 18:23:13 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=ea45e81d-7a36-4794-87bb-74268bf41b3c 00:16:27.022 18:23:13 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u ea45e81d-7a36-4794-87bb-74268bf41b3c 00:16:27.280 18:23:13 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=c2c90dad-3a4c-4dab-9072-27e97af54815 00:16:27.280 18:23:13 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 c2c90dad-3a4c-4dab-9072-27e97af54815 00:16:27.280 18:23:13 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:16:27.280 18:23:13 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:16:27.280 18:23:13 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=c2c90dad-3a4c-4dab-9072-27e97af54815 00:16:27.280 18:23:13 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:16:27.280 18:23:13 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size c2c90dad-3a4c-4dab-9072-27e97af54815 00:16:27.280 18:23:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=c2c90dad-3a4c-4dab-9072-27e97af54815 00:16:27.280 18:23:13 
ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:27.280 18:23:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:16:27.280 18:23:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:16:27.280 18:23:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c2c90dad-3a4c-4dab-9072-27e97af54815 00:16:27.538 18:23:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:27.538 { 00:16:27.538 "name": "c2c90dad-3a4c-4dab-9072-27e97af54815", 00:16:27.538 "aliases": [ 00:16:27.538 "lvs/nvme0n1p0" 00:16:27.538 ], 00:16:27.538 "product_name": "Logical Volume", 00:16:27.538 "block_size": 4096, 00:16:27.538 "num_blocks": 26476544, 00:16:27.538 "uuid": "c2c90dad-3a4c-4dab-9072-27e97af54815", 00:16:27.538 "assigned_rate_limits": { 00:16:27.538 "rw_ios_per_sec": 0, 00:16:27.538 "rw_mbytes_per_sec": 0, 00:16:27.538 "r_mbytes_per_sec": 0, 00:16:27.538 "w_mbytes_per_sec": 0 00:16:27.538 }, 00:16:27.538 "claimed": false, 00:16:27.538 "zoned": false, 00:16:27.538 "supported_io_types": { 00:16:27.538 "read": true, 00:16:27.538 "write": true, 00:16:27.538 "unmap": true, 00:16:27.538 "flush": false, 00:16:27.538 "reset": true, 00:16:27.538 "nvme_admin": false, 00:16:27.538 "nvme_io": false, 00:16:27.538 "nvme_io_md": false, 00:16:27.538 "write_zeroes": true, 00:16:27.538 "zcopy": false, 00:16:27.538 "get_zone_info": false, 00:16:27.538 "zone_management": false, 00:16:27.538 "zone_append": false, 00:16:27.538 "compare": false, 00:16:27.538 "compare_and_write": false, 00:16:27.538 "abort": false, 00:16:27.538 "seek_hole": true, 00:16:27.538 "seek_data": true, 00:16:27.538 "copy": false, 00:16:27.538 "nvme_iov_md": false 00:16:27.538 }, 00:16:27.538 "driver_specific": { 00:16:27.538 "lvol": { 00:16:27.538 "lvol_store_uuid": "ea45e81d-7a36-4794-87bb-74268bf41b3c", 00:16:27.538 "base_bdev": "nvme0n1", 00:16:27.538 "thin_provision": true, 00:16:27.538 "num_allocated_clusters": 0, 00:16:27.538 "snapshot": false, 00:16:27.538 "clone": false, 00:16:27.538 "esnap_clone": false 00:16:27.538 } 00:16:27.538 } 00:16:27.538 } 00:16:27.538 ]' 00:16:27.538 18:23:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:27.538 18:23:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:16:27.538 18:23:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:27.538 18:23:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:27.538 18:23:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:27.538 18:23:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:16:27.538 18:23:13 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:16:27.538 18:23:13 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:16:27.538 18:23:13 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:16:27.796 18:23:14 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:16:27.796 18:23:14 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:16:27.796 18:23:14 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size c2c90dad-3a4c-4dab-9072-27e97af54815 00:16:27.796 18:23:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=c2c90dad-3a4c-4dab-9072-27e97af54815 00:16:27.796 18:23:14 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1379 -- # local bdev_info 00:16:27.796 18:23:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:16:28.054 18:23:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:16:28.054 18:23:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c2c90dad-3a4c-4dab-9072-27e97af54815 00:16:28.054 18:23:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:28.054 { 00:16:28.054 "name": "c2c90dad-3a4c-4dab-9072-27e97af54815", 00:16:28.054 "aliases": [ 00:16:28.054 "lvs/nvme0n1p0" 00:16:28.054 ], 00:16:28.054 "product_name": "Logical Volume", 00:16:28.054 "block_size": 4096, 00:16:28.054 "num_blocks": 26476544, 00:16:28.054 "uuid": "c2c90dad-3a4c-4dab-9072-27e97af54815", 00:16:28.054 "assigned_rate_limits": { 00:16:28.054 "rw_ios_per_sec": 0, 00:16:28.054 "rw_mbytes_per_sec": 0, 00:16:28.054 "r_mbytes_per_sec": 0, 00:16:28.054 "w_mbytes_per_sec": 0 00:16:28.054 }, 00:16:28.054 "claimed": false, 00:16:28.054 "zoned": false, 00:16:28.054 "supported_io_types": { 00:16:28.054 "read": true, 00:16:28.054 "write": true, 00:16:28.054 "unmap": true, 00:16:28.054 "flush": false, 00:16:28.054 "reset": true, 00:16:28.054 "nvme_admin": false, 00:16:28.054 "nvme_io": false, 00:16:28.054 "nvme_io_md": false, 00:16:28.054 "write_zeroes": true, 00:16:28.054 "zcopy": false, 00:16:28.054 "get_zone_info": false, 00:16:28.054 "zone_management": false, 00:16:28.054 "zone_append": false, 00:16:28.054 "compare": false, 00:16:28.054 "compare_and_write": false, 00:16:28.054 "abort": false, 00:16:28.054 "seek_hole": true, 00:16:28.054 "seek_data": true, 00:16:28.054 "copy": false, 00:16:28.054 "nvme_iov_md": false 00:16:28.054 }, 00:16:28.054 "driver_specific": { 00:16:28.054 "lvol": { 00:16:28.054 "lvol_store_uuid": "ea45e81d-7a36-4794-87bb-74268bf41b3c", 00:16:28.054 "base_bdev": "nvme0n1", 00:16:28.054 "thin_provision": true, 00:16:28.054 "num_allocated_clusters": 0, 00:16:28.054 "snapshot": false, 00:16:28.054 "clone": false, 00:16:28.054 "esnap_clone": false 00:16:28.054 } 00:16:28.054 } 00:16:28.054 } 00:16:28.054 ]' 00:16:28.054 18:23:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:28.313 18:23:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:16:28.313 18:23:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:28.313 18:23:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:28.313 18:23:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:28.313 18:23:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:16:28.313 18:23:14 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:16:28.313 18:23:14 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:16:28.571 18:23:14 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:16:28.571 18:23:14 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:16:28.571 18:23:14 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:16:28.571 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:16:28.571 18:23:14 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size c2c90dad-3a4c-4dab-9072-27e97af54815 00:16:28.571 18:23:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=c2c90dad-3a4c-4dab-9072-27e97af54815 
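Two details in this stretch of the trace deserve a note. First, the 'unary operator expected' message from fio.sh line 52 is an ordinary shell artifact: the tested variable is empty and unquoted, so the expression expands to '[' -eq 1 ']', the condition evaluates as false, and the run simply continues (the suite goes on with l2p_dram_size_mb=60). Second, the repeated jq calls here are the get_bdev_size helper converting a bdev's block count into MiB; a sketch consistent with the values printed in this trace:

    get_bdev_size() {    # prints the size of a bdev in MiB
        local bdev_name=$1 bdev_info bs nb
        bdev_info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b "$bdev_name")
        bs=$(jq '.[] .block_size' <<<"$bdev_info")    # 4096 for both bdevs above
        nb=$(jq '.[] .num_blocks' <<<"$bdev_info")    # 1310720 (nvme0n1), 26476544 (the lvol)
        echo $(( bs * nb / 1024 / 1024 ))             # 5120 MiB and 103424 MiB respectively
    }

The arithmetic checks out: 4096 x 1310720 / 2^20 = 5120 and 4096 x 26476544 / 2^20 = 103424, matching the bdev_size values logged.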
00:16:28.571 18:23:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:28.571 18:23:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:16:28.571 18:23:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:16:28.571 18:23:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c2c90dad-3a4c-4dab-9072-27e97af54815 00:16:28.848 18:23:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:28.848 { 00:16:28.848 "name": "c2c90dad-3a4c-4dab-9072-27e97af54815", 00:16:28.848 "aliases": [ 00:16:28.848 "lvs/nvme0n1p0" 00:16:28.848 ], 00:16:28.848 "product_name": "Logical Volume", 00:16:28.848 "block_size": 4096, 00:16:28.848 "num_blocks": 26476544, 00:16:28.848 "uuid": "c2c90dad-3a4c-4dab-9072-27e97af54815", 00:16:28.848 "assigned_rate_limits": { 00:16:28.848 "rw_ios_per_sec": 0, 00:16:28.848 "rw_mbytes_per_sec": 0, 00:16:28.848 "r_mbytes_per_sec": 0, 00:16:28.848 "w_mbytes_per_sec": 0 00:16:28.848 }, 00:16:28.848 "claimed": false, 00:16:28.848 "zoned": false, 00:16:28.848 "supported_io_types": { 00:16:28.848 "read": true, 00:16:28.848 "write": true, 00:16:28.848 "unmap": true, 00:16:28.848 "flush": false, 00:16:28.848 "reset": true, 00:16:28.848 "nvme_admin": false, 00:16:28.848 "nvme_io": false, 00:16:28.848 "nvme_io_md": false, 00:16:28.848 "write_zeroes": true, 00:16:28.848 "zcopy": false, 00:16:28.848 "get_zone_info": false, 00:16:28.848 "zone_management": false, 00:16:28.848 "zone_append": false, 00:16:28.848 "compare": false, 00:16:28.848 "compare_and_write": false, 00:16:28.848 "abort": false, 00:16:28.848 "seek_hole": true, 00:16:28.848 "seek_data": true, 00:16:28.848 "copy": false, 00:16:28.848 "nvme_iov_md": false 00:16:28.848 }, 00:16:28.848 "driver_specific": { 00:16:28.848 "lvol": { 00:16:28.848 "lvol_store_uuid": "ea45e81d-7a36-4794-87bb-74268bf41b3c", 00:16:28.848 "base_bdev": "nvme0n1", 00:16:28.848 "thin_provision": true, 00:16:28.848 "num_allocated_clusters": 0, 00:16:28.848 "snapshot": false, 00:16:28.848 "clone": false, 00:16:28.848 "esnap_clone": false 00:16:28.848 } 00:16:28.848 } 00:16:28.848 } 00:16:28.848 ]' 00:16:28.848 18:23:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:28.848 18:23:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:16:28.848 18:23:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:28.848 18:23:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:28.848 18:23:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:28.848 18:23:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:16:28.848 18:23:15 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:16:28.848 18:23:15 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:16:28.848 18:23:15 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d c2c90dad-3a4c-4dab-9072-27e97af54815 -c nvc0n1p0 --l2p_dram_limit 60 00:16:29.125 [2024-07-11 18:23:15.377446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.125 [2024-07-11 18:23:15.377553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:29.125 [2024-07-11 18:23:15.377594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:29.125 [2024-07-11 18:23:15.377622] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.125 [2024-07-11 18:23:15.377747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.125 [2024-07-11 18:23:15.377770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:29.125 [2024-07-11 18:23:15.377786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:16:29.125 [2024-07-11 18:23:15.377799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.125 [2024-07-11 18:23:15.377861] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:29.125 [2024-07-11 18:23:15.378238] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:29.125 [2024-07-11 18:23:15.378282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.125 [2024-07-11 18:23:15.378297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:29.125 [2024-07-11 18:23:15.378312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.447 ms 00:16:29.125 [2024-07-11 18:23:15.378325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.125 [2024-07-11 18:23:15.378519] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 107398ee-71ab-42d9-b790-5254843a5a84 00:16:29.125 [2024-07-11 18:23:15.379636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.125 [2024-07-11 18:23:15.379691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:16:29.125 [2024-07-11 18:23:15.379708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:16:29.125 [2024-07-11 18:23:15.379722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.125 [2024-07-11 18:23:15.384306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.125 [2024-07-11 18:23:15.384361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:29.125 [2024-07-11 18:23:15.384395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.504 ms 00:16:29.125 [2024-07-11 18:23:15.384411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.125 [2024-07-11 18:23:15.384568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.125 [2024-07-11 18:23:15.384611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:29.125 [2024-07-11 18:23:15.384625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.111 ms 00:16:29.125 [2024-07-11 18:23:15.384642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.125 [2024-07-11 18:23:15.384738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.125 [2024-07-11 18:23:15.384762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:29.125 [2024-07-11 18:23:15.384793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:16:29.125 [2024-07-11 18:23:15.384812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.125 [2024-07-11 18:23:15.384864] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:29.125 [2024-07-11 18:23:15.386441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.125 [2024-07-11 18:23:15.386481] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:29.125 [2024-07-11 18:23:15.386503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.582 ms 00:16:29.125 [2024-07-11 18:23:15.386516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.125 [2024-07-11 18:23:15.386580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.125 [2024-07-11 18:23:15.386598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:29.125 [2024-07-11 18:23:15.386613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:16:29.125 [2024-07-11 18:23:15.386628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.126 [2024-07-11 18:23:15.386698] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:16:29.126 [2024-07-11 18:23:15.386894] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:29.126 [2024-07-11 18:23:15.386929] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:29.126 [2024-07-11 18:23:15.386950] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:16:29.126 [2024-07-11 18:23:15.386972] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:29.126 [2024-07-11 18:23:15.386986] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:29.126 [2024-07-11 18:23:15.387000] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:16:29.126 [2024-07-11 18:23:15.387011] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:29.126 [2024-07-11 18:23:15.387025] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:29.126 [2024-07-11 18:23:15.387052] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:29.126 [2024-07-11 18:23:15.387069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.126 [2024-07-11 18:23:15.387102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:29.126 [2024-07-11 18:23:15.387121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.382 ms 00:16:29.126 [2024-07-11 18:23:15.387134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.126 [2024-07-11 18:23:15.387244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.126 [2024-07-11 18:23:15.387262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:29.126 [2024-07-11 18:23:15.387280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:16:29.126 [2024-07-11 18:23:15.387292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.126 [2024-07-11 18:23:15.387420] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:29.126 [2024-07-11 18:23:15.387455] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:29.126 [2024-07-11 18:23:15.387474] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:29.126 [2024-07-11 18:23:15.387500] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:29.126 [2024-07-11 18:23:15.387519] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:29.126 [2024-07-11 
18:23:15.387530] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:29.126 [2024-07-11 18:23:15.387543] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:16:29.126 [2024-07-11 18:23:15.387554] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:29.126 [2024-07-11 18:23:15.387567] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:16:29.126 [2024-07-11 18:23:15.387577] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:29.126 [2024-07-11 18:23:15.387590] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:29.126 [2024-07-11 18:23:15.387601] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:16:29.126 [2024-07-11 18:23:15.387613] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:29.126 [2024-07-11 18:23:15.387624] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:29.126 [2024-07-11 18:23:15.387640] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:16:29.126 [2024-07-11 18:23:15.387650] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:29.126 [2024-07-11 18:23:15.387663] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:29.126 [2024-07-11 18:23:15.387673] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:16:29.126 [2024-07-11 18:23:15.387685] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:29.126 [2024-07-11 18:23:15.387696] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:29.126 [2024-07-11 18:23:15.387708] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:16:29.126 [2024-07-11 18:23:15.387719] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:29.126 [2024-07-11 18:23:15.387731] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:29.126 [2024-07-11 18:23:15.387742] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:16:29.126 [2024-07-11 18:23:15.387754] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:29.126 [2024-07-11 18:23:15.387764] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:29.126 [2024-07-11 18:23:15.387777] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:16:29.126 [2024-07-11 18:23:15.387787] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:29.126 [2024-07-11 18:23:15.387801] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:29.126 [2024-07-11 18:23:15.387812] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:16:29.126 [2024-07-11 18:23:15.387827] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:29.126 [2024-07-11 18:23:15.387837] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:29.126 [2024-07-11 18:23:15.387850] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:16:29.126 [2024-07-11 18:23:15.387860] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:29.126 [2024-07-11 18:23:15.387872] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:29.126 [2024-07-11 18:23:15.387883] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:16:29.126 [2024-07-11 18:23:15.387897] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.25 MiB 00:16:29.126 [2024-07-11 18:23:15.387907] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:29.126 [2024-07-11 18:23:15.387920] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:16:29.126 [2024-07-11 18:23:15.387930] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:29.126 [2024-07-11 18:23:15.387943] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:29.126 [2024-07-11 18:23:15.387953] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:16:29.126 [2024-07-11 18:23:15.387966] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:29.126 [2024-07-11 18:23:15.387976] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:29.126 [2024-07-11 18:23:15.387990] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:29.126 [2024-07-11 18:23:15.388020] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:29.126 [2024-07-11 18:23:15.388037] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:29.126 [2024-07-11 18:23:15.388048] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:29.126 [2024-07-11 18:23:15.388061] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:29.126 [2024-07-11 18:23:15.388072] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:29.126 [2024-07-11 18:23:15.388101] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:29.126 [2024-07-11 18:23:15.388115] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:29.126 [2024-07-11 18:23:15.388128] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:29.126 [2024-07-11 18:23:15.388149] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:29.126 [2024-07-11 18:23:15.388168] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:29.126 [2024-07-11 18:23:15.388196] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:16:29.126 [2024-07-11 18:23:15.388211] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:16:29.126 [2024-07-11 18:23:15.388223] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:16:29.126 [2024-07-11 18:23:15.388237] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:16:29.126 [2024-07-11 18:23:15.388248] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:16:29.126 [2024-07-11 18:23:15.388262] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:16:29.126 [2024-07-11 18:23:15.388273] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:16:29.126 [2024-07-11 18:23:15.388289] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:16:29.126 [2024-07-11 
18:23:15.388300] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:16:29.126 [2024-07-11 18:23:15.388314] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:16:29.126 [2024-07-11 18:23:15.388325] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:16:29.126 [2024-07-11 18:23:15.388338] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:16:29.126 [2024-07-11 18:23:15.388350] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:16:29.126 [2024-07-11 18:23:15.388367] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:16:29.126 [2024-07-11 18:23:15.388378] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:29.126 [2024-07-11 18:23:15.388409] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:29.126 [2024-07-11 18:23:15.388423] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:29.126 [2024-07-11 18:23:15.388438] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:29.126 [2024-07-11 18:23:15.388451] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:29.126 [2024-07-11 18:23:15.388465] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:29.126 [2024-07-11 18:23:15.388479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.126 [2024-07-11 18:23:15.388494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:29.126 [2024-07-11 18:23:15.388521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.130 ms 00:16:29.126 [2024-07-11 18:23:15.388538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.126 [2024-07-11 18:23:15.388654] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
00:16:29.126 [2024-07-11 18:23:15.388689] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:16:31.664 [2024-07-11 18:23:18.018034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.664 [2024-07-11 18:23:18.018143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:16:31.664 [2024-07-11 18:23:18.018184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2629.391 ms 00:16:31.664 [2024-07-11 18:23:18.018199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.664 [2024-07-11 18:23:18.026054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.664 [2024-07-11 18:23:18.026149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:31.664 [2024-07-11 18:23:18.026177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.741 ms 00:16:31.664 [2024-07-11 18:23:18.026215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.664 [2024-07-11 18:23:18.026351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.664 [2024-07-11 18:23:18.026387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:31.664 [2024-07-11 18:23:18.026402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:16:31.664 [2024-07-11 18:23:18.026416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.664 [2024-07-11 18:23:18.044513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.664 [2024-07-11 18:23:18.044583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:31.664 [2024-07-11 18:23:18.044622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.999 ms 00:16:31.664 [2024-07-11 18:23:18.044637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.664 [2024-07-11 18:23:18.044730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.664 [2024-07-11 18:23:18.044751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:31.664 [2024-07-11 18:23:18.044781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:31.664 [2024-07-11 18:23:18.044812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.664 [2024-07-11 18:23:18.045239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.664 [2024-07-11 18:23:18.045273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:31.664 [2024-07-11 18:23:18.045292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.333 ms 00:16:31.664 [2024-07-11 18:23:18.045306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.664 [2024-07-11 18:23:18.045473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.664 [2024-07-11 18:23:18.045505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:31.664 [2024-07-11 18:23:18.045519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.132 ms 00:16:31.664 [2024-07-11 18:23:18.045533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.664 [2024-07-11 18:23:18.051451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.664 [2024-07-11 18:23:18.051509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:31.664 [2024-07-11 
18:23:18.051528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.882 ms 00:16:31.664 [2024-07-11 18:23:18.051546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.664 [2024-07-11 18:23:18.060685] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:31.664 [2024-07-11 18:23:18.075351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.664 [2024-07-11 18:23:18.075422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:31.664 [2024-07-11 18:23:18.075446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.668 ms 00:16:31.664 [2024-07-11 18:23:18.075463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.923 [2024-07-11 18:23:18.113458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.923 [2024-07-11 18:23:18.113551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:16:31.923 [2024-07-11 18:23:18.113593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.923 ms 00:16:31.923 [2024-07-11 18:23:18.113608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.923 [2024-07-11 18:23:18.113866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.923 [2024-07-11 18:23:18.113905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:31.923 [2024-07-11 18:23:18.113923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.187 ms 00:16:31.923 [2024-07-11 18:23:18.113935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.923 [2024-07-11 18:23:18.117639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.923 [2024-07-11 18:23:18.117678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:16:31.923 [2024-07-11 18:23:18.117714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.644 ms 00:16:31.923 [2024-07-11 18:23:18.117727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.923 [2024-07-11 18:23:18.121011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.923 [2024-07-11 18:23:18.121049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:16:31.923 [2024-07-11 18:23:18.121084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.223 ms 00:16:31.923 [2024-07-11 18:23:18.121129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.923 [2024-07-11 18:23:18.121523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.923 [2024-07-11 18:23:18.121565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:31.923 [2024-07-11 18:23:18.121583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.335 ms 00:16:31.923 [2024-07-11 18:23:18.121595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.923 [2024-07-11 18:23:18.149319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.923 [2024-07-11 18:23:18.149386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:16:31.923 [2024-07-11 18:23:18.149427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.655 ms 00:16:31.923 [2024-07-11 18:23:18.149440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.923 [2024-07-11 18:23:18.153846] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.923 [2024-07-11 18:23:18.153888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:16:31.923 [2024-07-11 18:23:18.153928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.338 ms 00:16:31.923 [2024-07-11 18:23:18.153941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.923 [2024-07-11 18:23:18.157834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.923 [2024-07-11 18:23:18.157872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:16:31.923 [2024-07-11 18:23:18.157907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.771 ms 00:16:31.923 [2024-07-11 18:23:18.157918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.923 [2024-07-11 18:23:18.161818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.923 [2024-07-11 18:23:18.161860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:31.923 [2024-07-11 18:23:18.161895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.838 ms 00:16:31.923 [2024-07-11 18:23:18.161907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.923 [2024-07-11 18:23:18.161971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.923 [2024-07-11 18:23:18.162006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:31.923 [2024-07-11 18:23:18.162022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:16:31.923 [2024-07-11 18:23:18.162034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.923 [2024-07-11 18:23:18.162150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.923 [2024-07-11 18:23:18.162175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:31.923 [2024-07-11 18:23:18.162191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:16:31.923 [2024-07-11 18:23:18.162222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.923 [2024-07-11 18:23:18.163463] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2785.475 ms, result 0 00:16:31.923 { 00:16:31.923 "name": "ftl0", 00:16:31.923 "uuid": "107398ee-71ab-42d9-b790-5254843a5a84" 00:16:31.923 } 00:16:31.923 18:23:18 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:16:31.923 18:23:18 ftl.ftl_fio_basic -- common/autotest_common.sh@897 -- # local bdev_name=ftl0 00:16:31.923 18:23:18 ftl.ftl_fio_basic -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:31.923 18:23:18 ftl.ftl_fio_basic -- common/autotest_common.sh@899 -- # local i 00:16:31.923 18:23:18 ftl.ftl_fio_basic -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:31.923 18:23:18 ftl.ftl_fio_basic -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:31.923 18:23:18 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:32.182 18:23:18 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:16:32.441 [ 00:16:32.441 { 00:16:32.441 "name": "ftl0", 00:16:32.441 "aliases": [ 00:16:32.441 "107398ee-71ab-42d9-b790-5254843a5a84" 00:16:32.441 ], 00:16:32.441 "product_name": "FTL disk", 00:16:32.441 
"block_size": 4096, 00:16:32.441 "num_blocks": 20971520, 00:16:32.441 "uuid": "107398ee-71ab-42d9-b790-5254843a5a84", 00:16:32.441 "assigned_rate_limits": { 00:16:32.441 "rw_ios_per_sec": 0, 00:16:32.441 "rw_mbytes_per_sec": 0, 00:16:32.441 "r_mbytes_per_sec": 0, 00:16:32.441 "w_mbytes_per_sec": 0 00:16:32.441 }, 00:16:32.441 "claimed": false, 00:16:32.441 "zoned": false, 00:16:32.441 "supported_io_types": { 00:16:32.441 "read": true, 00:16:32.441 "write": true, 00:16:32.441 "unmap": true, 00:16:32.441 "flush": true, 00:16:32.441 "reset": false, 00:16:32.441 "nvme_admin": false, 00:16:32.441 "nvme_io": false, 00:16:32.441 "nvme_io_md": false, 00:16:32.441 "write_zeroes": true, 00:16:32.441 "zcopy": false, 00:16:32.441 "get_zone_info": false, 00:16:32.441 "zone_management": false, 00:16:32.441 "zone_append": false, 00:16:32.441 "compare": false, 00:16:32.441 "compare_and_write": false, 00:16:32.441 "abort": false, 00:16:32.441 "seek_hole": false, 00:16:32.441 "seek_data": false, 00:16:32.441 "copy": false, 00:16:32.441 "nvme_iov_md": false 00:16:32.441 }, 00:16:32.441 "driver_specific": { 00:16:32.441 "ftl": { 00:16:32.441 "base_bdev": "c2c90dad-3a4c-4dab-9072-27e97af54815", 00:16:32.441 "cache": "nvc0n1p0" 00:16:32.441 } 00:16:32.441 } 00:16:32.441 } 00:16:32.441 ] 00:16:32.441 18:23:18 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # return 0 00:16:32.441 18:23:18 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:16:32.441 18:23:18 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:16:32.700 18:23:18 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:16:32.700 18:23:18 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:16:32.961 [2024-07-11 18:23:19.207444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.961 [2024-07-11 18:23:19.207545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:32.961 [2024-07-11 18:23:19.207568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:32.961 [2024-07-11 18:23:19.207584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.961 [2024-07-11 18:23:19.207631] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:32.961 [2024-07-11 18:23:19.208121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.961 [2024-07-11 18:23:19.208153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:32.961 [2024-07-11 18:23:19.208183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.460 ms 00:16:32.961 [2024-07-11 18:23:19.208196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.961 [2024-07-11 18:23:19.208649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.961 [2024-07-11 18:23:19.208676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:32.961 [2024-07-11 18:23:19.208693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.413 ms 00:16:32.961 [2024-07-11 18:23:19.208705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.961 [2024-07-11 18:23:19.212020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.961 [2024-07-11 18:23:19.212072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:32.961 [2024-07-11 
18:23:19.212105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.282 ms 00:16:32.961 [2024-07-11 18:23:19.212137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.961 [2024-07-11 18:23:19.219028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.961 [2024-07-11 18:23:19.219076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:32.961 [2024-07-11 18:23:19.219119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.840 ms 00:16:32.961 [2024-07-11 18:23:19.219133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.961 [2024-07-11 18:23:19.220529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.961 [2024-07-11 18:23:19.220600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:32.961 [2024-07-11 18:23:19.220622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.281 ms 00:16:32.961 [2024-07-11 18:23:19.220634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.961 [2024-07-11 18:23:19.224594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.961 [2024-07-11 18:23:19.224657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:32.961 [2024-07-11 18:23:19.224694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.900 ms 00:16:32.961 [2024-07-11 18:23:19.224706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.961 [2024-07-11 18:23:19.224892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.961 [2024-07-11 18:23:19.224912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:32.961 [2024-07-11 18:23:19.224928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.129 ms 00:16:32.961 [2024-07-11 18:23:19.224940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.961 [2024-07-11 18:23:19.226821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.961 [2024-07-11 18:23:19.226860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:16:32.961 [2024-07-11 18:23:19.226878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.842 ms 00:16:32.961 [2024-07-11 18:23:19.226890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.961 [2024-07-11 18:23:19.228378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.961 [2024-07-11 18:23:19.228420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:16:32.961 [2024-07-11 18:23:19.228441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.434 ms 00:16:32.961 [2024-07-11 18:23:19.228453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.961 [2024-07-11 18:23:19.229536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.961 [2024-07-11 18:23:19.229574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:32.961 [2024-07-11 18:23:19.229593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.025 ms 00:16:32.961 [2024-07-11 18:23:19.229604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.961 [2024-07-11 18:23:19.230813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.961 [2024-07-11 18:23:19.230883] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:32.961 [2024-07-11 18:23:19.230901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.101 ms 00:16:32.961 [2024-07-11 18:23:19.230912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.961 [2024-07-11 18:23:19.230968] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:32.961 [2024-07-11 18:23:19.230992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:32.961 [2024-07-11 18:23:19.231030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:32.961 [2024-07-11 18:23:19.231043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:32.961 [2024-07-11 18:23:19.231058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:32.961 [2024-07-11 18:23:19.231070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:32.961 [2024-07-11 18:23:19.231110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:32.961 [2024-07-11 18:23:19.231124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:32.961 [2024-07-11 18:23:19.231139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:32.961 [2024-07-11 18:23:19.231151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:32.961 [2024-07-11 18:23:19.231164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:32.961 [2024-07-11 18:23:19.231176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:32.961 [2024-07-11 18:23:19.231191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:32.961 [2024-07-11 18:23:19.231203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:32.961 [2024-07-11 18:23:19.231217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:32.961 [2024-07-11 18:23:19.231230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:32.961 [2024-07-11 18:23:19.231244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:32.961 [2024-07-11 18:23:19.231255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:32.961 [2024-07-11 18:23:19.231269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:32.961 [2024-07-11 18:23:19.231280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:32.961 [2024-07-11 18:23:19.231294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:32.961 [2024-07-11 18:23:19.231306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:32.961 [2024-07-11 18:23:19.231324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:32.961 [2024-07-11 
18:23:19.231336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:32.962 [2024-07-11 18:23:19.231350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:32.962 [2024-07-11 18:23:19.231362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:32.962 [2024-07-11 18:23:19.231375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:32.962 [2024-07-11 18:23:19.231388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:32.962 [2024-07-11 18:23:19.231422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:32.962 [2024-07-11 18:23:19.231434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:32.962 [2024-07-11 18:23:19.231453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:32.962 [2024-07-11 18:23:19.231465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:32.962 [2024-07-11 18:23:19.231479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:32.962 [2024-07-11 18:23:19.231491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:32.962 [2024-07-11 18:23:19.231505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:32.962 [2024-07-11 18:23:19.231517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:32.962 [2024-07-11 18:23:19.231530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:32.962 [2024-07-11 18:23:19.231542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:32.962 [2024-07-11 18:23:19.231558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:32.962 [2024-07-11 18:23:19.231570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:32.962 [2024-07-11 18:23:19.231583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:32.962 [2024-07-11 18:23:19.231595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:32.962 [2024-07-11 18:23:19.231609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:32.962 [2024-07-11 18:23:19.231621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:32.962 [2024-07-11 18:23:19.231634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:32.962 [2024-07-11 18:23:19.231646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:32.962 [2024-07-11 18:23:19.231662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:32.962 [2024-07-11 18:23:19.231674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 
00:16:32.962 [2024-07-11 18:23:19.231688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:32.962 [2024-07-11 18:23:19.231700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:32.962 [2024-07-11 18:23:19.231713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:32.962 [2024-07-11 18:23:19.231725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:32.962 [2024-07-11 18:23:19.231738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:32.962 [2024-07-11 18:23:19.231750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:32.962 [2024-07-11 18:23:19.231765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:32.962 [2024-07-11 18:23:19.231777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:32.962 [2024-07-11 18:23:19.231791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:32.962 [2024-07-11 18:23:19.231803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:32.962 [2024-07-11 18:23:19.231817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:32.962 [2024-07-11 18:23:19.231829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:32.962 [2024-07-11 18:23:19.231842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:32.962 [2024-07-11 18:23:19.231854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:32.962 [2024-07-11 18:23:19.231869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:32.962 [2024-07-11 18:23:19.231883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:32.962 [2024-07-11 18:23:19.231896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:32.962 [2024-07-11 18:23:19.231908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:32.962 [2024-07-11 18:23:19.231922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:32.962 [2024-07-11 18:23:19.231934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:32.962 [2024-07-11 18:23:19.231948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:32.962 [2024-07-11 18:23:19.231960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:32.962 [2024-07-11 18:23:19.231975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:32.962 [2024-07-11 18:23:19.231987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:32.962 [2024-07-11 18:23:19.232002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 
wr_cnt: 0 state: free 00:16:32.962 [2024-07-11 18:23:19.232014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:32.962 [2024-07-11 18:23:19.232028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:32.962 [2024-07-11 18:23:19.232040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:32.962 [2024-07-11 18:23:19.232054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:32.962 [2024-07-11 18:23:19.232066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:32.962 [2024-07-11 18:23:19.232093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:32.962 [2024-07-11 18:23:19.232108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:32.962 [2024-07-11 18:23:19.232122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:32.962 [2024-07-11 18:23:19.232134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:32.962 [2024-07-11 18:23:19.232147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:32.962 [2024-07-11 18:23:19.232159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:32.962 [2024-07-11 18:23:19.232173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:32.962 [2024-07-11 18:23:19.232184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:32.962 [2024-07-11 18:23:19.232200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:32.962 [2024-07-11 18:23:19.232212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:32.962 [2024-07-11 18:23:19.232225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:32.962 [2024-07-11 18:23:19.232237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:32.962 [2024-07-11 18:23:19.232251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:32.962 [2024-07-11 18:23:19.232263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:32.962 [2024-07-11 18:23:19.232276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:32.962 [2024-07-11 18:23:19.232288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:32.962 [2024-07-11 18:23:19.232304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:32.962 [2024-07-11 18:23:19.232316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:32.962 [2024-07-11 18:23:19.232330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:32.962 [2024-07-11 18:23:19.232342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:32.962 [2024-07-11 18:23:19.232355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:32.962 [2024-07-11 18:23:19.232367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:32.962 [2024-07-11 18:23:19.232382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:32.962 [2024-07-11 18:23:19.232403] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:32.962 [2024-07-11 18:23:19.232422] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 107398ee-71ab-42d9-b790-5254843a5a84 00:16:32.962 [2024-07-11 18:23:19.232434] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:32.962 [2024-07-11 18:23:19.232447] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:32.962 [2024-07-11 18:23:19.232459] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:32.962 [2024-07-11 18:23:19.232472] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:32.962 [2024-07-11 18:23:19.232483] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:32.962 [2024-07-11 18:23:19.232496] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:32.962 [2024-07-11 18:23:19.232507] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:32.962 [2024-07-11 18:23:19.232520] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:32.962 [2024-07-11 18:23:19.232530] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:32.962 [2024-07-11 18:23:19.232544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.962 [2024-07-11 18:23:19.232557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:32.962 [2024-07-11 18:23:19.232571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.580 ms 00:16:32.962 [2024-07-11 18:23:19.232583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.962 [2024-07-11 18:23:19.234141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.962 [2024-07-11 18:23:19.234174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:32.962 [2024-07-11 18:23:19.234195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.514 ms 00:16:32.962 [2024-07-11 18:23:19.234207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.962 [2024-07-11 18:23:19.234342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.962 [2024-07-11 18:23:19.234362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:32.962 [2024-07-11 18:23:19.234393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:16:32.962 [2024-07-11 18:23:19.234406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.962 [2024-07-11 18:23:19.240047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:32.963 [2024-07-11 18:23:19.240166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:32.963 [2024-07-11 18:23:19.240189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:32.963 [2024-07-11 18:23:19.240202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.963 
[2024-07-11 18:23:19.240279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:32.963 [2024-07-11 18:23:19.240298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:32.963 [2024-07-11 18:23:19.240336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:32.963 [2024-07-11 18:23:19.240348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.963 [2024-07-11 18:23:19.240462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:32.963 [2024-07-11 18:23:19.240483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:32.963 [2024-07-11 18:23:19.240501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:32.963 [2024-07-11 18:23:19.240513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.963 [2024-07-11 18:23:19.240551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:32.963 [2024-07-11 18:23:19.240566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:32.963 [2024-07-11 18:23:19.240579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:32.963 [2024-07-11 18:23:19.240591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.963 [2024-07-11 18:23:19.249413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:32.963 [2024-07-11 18:23:19.249500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:32.963 [2024-07-11 18:23:19.249523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:32.963 [2024-07-11 18:23:19.249535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.963 [2024-07-11 18:23:19.256558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:32.963 [2024-07-11 18:23:19.256632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:32.963 [2024-07-11 18:23:19.256669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:32.963 [2024-07-11 18:23:19.256685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.963 [2024-07-11 18:23:19.256813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:32.963 [2024-07-11 18:23:19.256832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:32.963 [2024-07-11 18:23:19.256850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:32.963 [2024-07-11 18:23:19.256862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.963 [2024-07-11 18:23:19.256948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:32.963 [2024-07-11 18:23:19.256966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:32.963 [2024-07-11 18:23:19.256980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:32.963 [2024-07-11 18:23:19.256992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.963 [2024-07-11 18:23:19.257156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:32.963 [2024-07-11 18:23:19.257179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:32.963 [2024-07-11 18:23:19.257194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:32.963 [2024-07-11 18:23:19.257205] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.963 [2024-07-11 18:23:19.257282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:32.963 [2024-07-11 18:23:19.257302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:32.963 [2024-07-11 18:23:19.257317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:32.963 [2024-07-11 18:23:19.257329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.963 [2024-07-11 18:23:19.257391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:32.963 [2024-07-11 18:23:19.257425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:32.963 [2024-07-11 18:23:19.257444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:32.963 [2024-07-11 18:23:19.257469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.963 [2024-07-11 18:23:19.257541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:32.963 [2024-07-11 18:23:19.257572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:32.963 [2024-07-11 18:23:19.257588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:32.963 [2024-07-11 18:23:19.257600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.963 [2024-07-11 18:23:19.257795] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 50.325 ms, result 0 00:16:32.963 true 00:16:32.963 18:23:19 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 89311 00:16:32.963 18:23:19 ftl.ftl_fio_basic -- common/autotest_common.sh@948 -- # '[' -z 89311 ']' 00:16:32.963 18:23:19 ftl.ftl_fio_basic -- common/autotest_common.sh@952 -- # kill -0 89311 00:16:32.963 18:23:19 ftl.ftl_fio_basic -- common/autotest_common.sh@953 -- # uname 00:16:32.963 18:23:19 ftl.ftl_fio_basic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:32.963 18:23:19 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 89311 00:16:32.963 killing process with pid 89311 00:16:32.963 18:23:19 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:32.963 18:23:19 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:32.963 18:23:19 ftl.ftl_fio_basic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 89311' 00:16:32.963 18:23:19 ftl.ftl_fio_basic -- common/autotest_common.sh@967 -- # kill 89311 00:16:32.963 18:23:19 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # wait 89311 00:16:36.250 18:23:22 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:16:36.250 18:23:22 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:16:36.250 18:23:22 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:16:36.250 18:23:22 ftl.ftl_fio_basic -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:36.250 18:23:22 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:36.250 18:23:22 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:16:36.250 18:23:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:16:36.250 18:23:22 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:16:36.250 18:23:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:36.250 18:23:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:16:36.250 18:23:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:36.250 18:23:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:16:36.250 18:23:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:16:36.250 18:23:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:16:36.250 18:23:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:36.250 18:23:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:16:36.250 18:23:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:16:36.250 18:23:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:36.250 18:23:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:36.250 18:23:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:16:36.250 18:23:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:36.250 18:23:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:16:36.250 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:16:36.250 fio-3.35 00:16:36.250 Starting 1 thread 00:16:41.529 00:16:41.529 test: (groupid=0, jobs=1): err= 0: pid=89482: Thu Jul 11 18:23:27 2024 00:16:41.529 read: IOPS=905, BW=60.1MiB/s (63.0MB/s)(255MiB/4235msec) 00:16:41.529 slat (nsec): min=5540, max=30340, avg=7360.79, stdev=2955.15 00:16:41.529 clat (usec): min=337, max=733, avg=492.72, stdev=50.34 00:16:41.529 lat (usec): min=343, max=739, avg=500.08, stdev=51.00 00:16:41.529 clat percentiles (usec): 00:16:41.529 | 1.00th=[ 383], 5.00th=[ 429], 10.00th=[ 445], 20.00th=[ 457], 00:16:41.529 | 30.00th=[ 465], 40.00th=[ 474], 50.00th=[ 482], 60.00th=[ 494], 00:16:41.529 | 70.00th=[ 510], 80.00th=[ 537], 90.00th=[ 562], 95.00th=[ 586], 00:16:41.529 | 99.00th=[ 635], 99.50th=[ 652], 99.90th=[ 693], 99.95th=[ 701], 00:16:41.529 | 99.99th=[ 734] 00:16:41.529 write: IOPS=911, BW=60.5MiB/s (63.5MB/s)(256MiB/4230msec); 0 zone resets 00:16:41.529 slat (nsec): min=18688, max=82634, avg=24520.74, stdev=5036.77 00:16:41.529 clat (usec): min=387, max=1030, avg=562.24, stdev=63.93 00:16:41.529 lat (usec): min=410, max=1066, avg=586.76, stdev=64.29 00:16:41.529 clat percentiles (usec): 00:16:41.529 | 1.00th=[ 445], 5.00th=[ 474], 10.00th=[ 486], 20.00th=[ 506], 00:16:41.529 | 30.00th=[ 529], 40.00th=[ 553], 50.00th=[ 562], 60.00th=[ 570], 00:16:41.529 | 70.00th=[ 578], 80.00th=[ 603], 90.00th=[ 644], 95.00th=[ 660], 00:16:41.529 | 99.00th=[ 799], 99.50th=[ 848], 99.90th=[ 922], 99.95th=[ 930], 00:16:41.529 | 99.99th=[ 1029] 00:16:41.529 bw ( KiB/s): min=60928, max=62560, per=100.00%, avg=62084.00, stdev=567.77, samples=8 00:16:41.529 iops : min= 896, max= 920, avg=913.00, stdev= 8.35, samples=8 00:16:41.529 lat (usec) : 500=40.97%, 750=58.27%, 1000=0.75% 00:16:41.529 lat 
(msec) : 2=0.01% 00:16:41.529 cpu : usr=98.94%, sys=0.26%, ctx=8, majf=0, minf=1326 00:16:41.529 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:41.529 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:41.529 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:41.529 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:41.529 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:41.529 00:16:41.529 Run status group 0 (all jobs): 00:16:41.529 READ: bw=60.1MiB/s (63.0MB/s), 60.1MiB/s-60.1MiB/s (63.0MB/s-63.0MB/s), io=255MiB (267MB), run=4235-4235msec 00:16:41.529 WRITE: bw=60.5MiB/s (63.5MB/s), 60.5MiB/s-60.5MiB/s (63.5MB/s-63.5MB/s), io=256MiB (269MB), run=4230-4230msec 00:16:41.788 ----------------------------------------------------- 00:16:41.788 Suppressions used: 00:16:41.788 count bytes template 00:16:41.788 1 5 /usr/src/fio/parse.c 00:16:41.788 1 8 libtcmalloc_minimal.so 00:16:41.788 1 904 libcrypto.so 00:16:41.788 ----------------------------------------------------- 00:16:41.788 00:16:41.788 18:23:27 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:16:41.788 18:23:27 ftl.ftl_fio_basic -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:41.788 18:23:27 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:41.788 18:23:28 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:16:41.788 18:23:28 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:16:41.788 18:23:28 ftl.ftl_fio_basic -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:41.788 18:23:28 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:41.788 18:23:28 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:16:41.788 18:23:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:16:41.788 18:23:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:16:41.788 18:23:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:41.788 18:23:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:16:41.788 18:23:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:41.788 18:23:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:16:41.788 18:23:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:16:41.788 18:23:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:16:41.788 18:23:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:41.788 18:23:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:16:41.788 18:23:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:16:41.788 18:23:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:41.788 18:23:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:41.788 18:23:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:16:41.788 18:23:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # 
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:41.788 18:23:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:16:42.048 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:16:42.048 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:16:42.048 fio-3.35 00:16:42.048 Starting 2 threads 00:17:14.116 00:17:14.116 first_half: (groupid=0, jobs=1): err= 0: pid=89574: Thu Jul 11 18:23:58 2024 00:17:14.116 read: IOPS=2222, BW=8891KiB/s (9105kB/s)(255MiB/29351msec) 00:17:14.116 slat (nsec): min=4328, max=42917, avg=7386.10, stdev=1886.00 00:17:14.116 clat (usec): min=1061, max=322509, avg=45024.00, stdev=21304.86 00:17:14.116 lat (usec): min=1069, max=322514, avg=45031.38, stdev=21305.05 00:17:14.116 clat percentiles (msec): 00:17:14.116 | 1.00th=[ 11], 5.00th=[ 39], 10.00th=[ 39], 20.00th=[ 40], 00:17:14.116 | 30.00th=[ 40], 40.00th=[ 40], 50.00th=[ 40], 60.00th=[ 41], 00:17:14.116 | 70.00th=[ 43], 80.00th=[ 45], 90.00th=[ 52], 95.00th=[ 71], 00:17:14.116 | 99.00th=[ 163], 99.50th=[ 186], 99.90th=[ 220], 99.95th=[ 236], 00:17:14.116 | 99.99th=[ 313] 00:17:14.116 write: IOPS=2693, BW=10.5MiB/s (11.0MB/s)(256MiB/24329msec); 0 zone resets 00:17:14.116 slat (usec): min=5, max=283, avg= 9.35, stdev= 5.11 00:17:14.116 clat (usec): min=483, max=126277, avg=12451.98, stdev=22614.50 00:17:14.116 lat (usec): min=499, max=126284, avg=12461.33, stdev=22614.73 00:17:14.116 clat percentiles (usec): 00:17:14.116 | 1.00th=[ 1004], 5.00th=[ 1369], 10.00th=[ 1582], 20.00th=[ 1958], 00:17:14.116 | 30.00th=[ 2606], 40.00th=[ 4080], 50.00th=[ 5735], 60.00th=[ 6980], 00:17:14.116 | 70.00th=[ 8225], 80.00th=[ 12780], 90.00th=[ 17957], 95.00th=[ 89654], 00:17:14.116 | 99.00th=[103285], 99.50th=[107480], 99.90th=[114820], 99.95th=[123208], 00:17:14.116 | 99.99th=[125305] 00:17:14.116 bw ( KiB/s): min= 952, max=39096, per=100.00%, avg=20971.52, stdev=12480.44, samples=25 00:17:14.116 iops : min= 238, max= 9774, avg=5242.88, stdev=3120.11, samples=25 00:17:14.116 lat (usec) : 500=0.01%, 750=0.06%, 1000=0.43% 00:17:14.116 lat (msec) : 2=10.05%, 4=9.43%, 10=18.24%, 20=7.90%, 50=45.38% 00:17:14.116 lat (msec) : 100=6.16%, 250=2.32%, 500=0.02% 00:17:14.116 cpu : usr=99.08%, sys=0.20%, ctx=56, majf=0, minf=5553 00:17:14.116 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:17:14.116 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:14.116 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:14.116 issued rwts: total=65241,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:14.116 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:14.116 second_half: (groupid=0, jobs=1): err= 0: pid=89575: Thu Jul 11 18:23:58 2024 00:17:14.116 read: IOPS=2210, BW=8841KiB/s (9054kB/s)(255MiB/29551msec) 00:17:14.116 slat (usec): min=4, max=101, avg= 7.45, stdev= 1.86 00:17:14.116 clat (usec): min=1137, max=328126, avg=43824.86, stdev=23127.74 00:17:14.116 lat (usec): min=1147, max=328135, avg=43832.31, stdev=23127.93 00:17:14.116 clat percentiles (msec): 00:17:14.116 | 1.00th=[ 11], 5.00th=[ 36], 10.00th=[ 39], 20.00th=[ 39], 00:17:14.116 | 30.00th=[ 40], 40.00th=[ 40], 50.00th=[ 40], 60.00th=[ 41], 00:17:14.116 | 70.00th=[ 42], 80.00th=[ 45], 90.00th=[ 47], 95.00th=[ 56], 
00:17:14.116 | 99.00th=[ 182], 99.50th=[ 205], 99.90th=[ 239], 99.95th=[ 257], 00:17:14.116 | 99.99th=[ 321] 00:17:14.116 write: IOPS=2442, BW=9771KiB/s (10.0MB/s)(256MiB/26829msec); 0 zone resets 00:17:14.116 slat (usec): min=5, max=1041, avg= 9.40, stdev= 6.32 00:17:14.116 clat (usec): min=493, max=126955, avg=14010.20, stdev=24421.87 00:17:14.116 lat (usec): min=510, max=126962, avg=14019.60, stdev=24422.13 00:17:14.116 clat percentiles (usec): 00:17:14.116 | 1.00th=[ 1057], 5.00th=[ 1385], 10.00th=[ 1598], 20.00th=[ 1893], 00:17:14.116 | 30.00th=[ 2376], 40.00th=[ 3818], 50.00th=[ 5211], 60.00th=[ 6587], 00:17:14.116 | 70.00th=[ 8586], 80.00th=[ 14353], 90.00th=[ 39584], 95.00th=[ 92799], 00:17:14.116 | 99.00th=[104334], 99.50th=[107480], 99.90th=[124257], 99.95th=[125305], 00:17:14.116 | 99.99th=[126354] 00:17:14.116 bw ( KiB/s): min= 528, max=56536, per=89.42%, avg=17474.37, stdev=13656.82, samples=30 00:17:14.116 iops : min= 132, max=14134, avg=4368.57, stdev=3414.18, samples=30 00:17:14.116 lat (usec) : 500=0.01%, 750=0.03%, 1000=0.29% 00:17:14.116 lat (msec) : 2=11.32%, 4=9.40%, 10=15.72%, 20=8.66%, 50=46.44% 00:17:14.116 lat (msec) : 100=5.72%, 250=2.39%, 500=0.03% 00:17:14.116 cpu : usr=99.08%, sys=0.19%, ctx=42, majf=0, minf=5587 00:17:14.116 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:17:14.116 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:14.116 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:14.116 issued rwts: total=65318,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:14.116 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:14.116 00:17:14.116 Run status group 0 (all jobs): 00:17:14.116 READ: bw=17.3MiB/s (18.1MB/s), 8841KiB/s-8891KiB/s (9054kB/s-9105kB/s), io=510MiB (535MB), run=29351-29551msec 00:17:14.116 WRITE: bw=19.1MiB/s (20.0MB/s), 9771KiB/s-10.5MiB/s (10.0MB/s-11.0MB/s), io=512MiB (537MB), run=24329-26829msec 00:17:14.116 ----------------------------------------------------- 00:17:14.116 Suppressions used: 00:17:14.116 count bytes template 00:17:14.116 2 10 /usr/src/fio/parse.c 00:17:14.116 2 192 /usr/src/fio/iolog.c 00:17:14.116 1 8 libtcmalloc_minimal.so 00:17:14.116 1 904 libcrypto.so 00:17:14.116 ----------------------------------------------------- 00:17:14.116 00:17:14.116 18:23:59 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:17:14.116 18:23:59 ftl.ftl_fio_basic -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:14.116 18:23:59 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:14.116 18:23:59 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:17:14.116 18:23:59 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:17:14.116 18:23:59 ftl.ftl_fio_basic -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:14.116 18:23:59 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:14.116 18:23:59 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:17:14.116 18:23:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:17:14.116 18:23:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:17:14.116 18:23:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 
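The xtrace entries repeating around this point show the same preparation before every fio job: the helper runs ldd against the spdk_bdev plugin, extracts the path of the ASan runtime it links against (/usr/lib64/libasan.so.8 in this run), and preloads the sanitizer ahead of the plugin so its interceptors are installed before the plugin loads; only then is stock fio launched from /usr/src/fio. A minimal sketch of that pattern, assuming the paths shown in the trace (the wrapper name below is illustrative, not a function from the repo):

  # Illustrative wrapper; paths are the ones logged by autotest_common.sh.
  fio_with_spdk_bdev() {
    local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    local job=$1
    local asan_lib
    # Find the ASan runtime the plugin is linked against, if any.
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
    # Preload the sanitizer before the plugin so its interceptors come up first,
    # then run stock fio with the SPDK bdev ioengine the plugin provides.
    LD_PRELOAD="${asan_lib:+$asan_lib }$plugin" /usr/src/fio/fio "$job"
  }

  fio_with_spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio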
00:17:14.116 18:23:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:17:14.116 18:23:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:14.116 18:23:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:17:14.116 18:23:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:17:14.116 18:23:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:17:14.116 18:23:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:14.116 18:23:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:17:14.116 18:23:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:17:14.116 18:23:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:14.116 18:23:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:14.116 18:23:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:17:14.117 18:23:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:14.117 18:23:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:17:14.117 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:17:14.117 fio-3.35 00:17:14.117 Starting 1 thread 00:17:32.199 00:17:32.199 test: (groupid=0, jobs=1): err= 0: pid=89933: Thu Jul 11 18:24:16 2024 00:17:32.199 read: IOPS=6432, BW=25.1MiB/s (26.3MB/s)(255MiB/10136msec) 00:17:32.199 slat (usec): min=4, max=657, avg= 7.03, stdev= 4.41 00:17:32.199 clat (usec): min=772, max=39220, avg=19886.78, stdev=894.41 00:17:32.199 lat (usec): min=777, max=39228, avg=19893.82, stdev=894.36 00:17:32.199 clat percentiles (usec): 00:17:32.199 | 1.00th=[19006], 5.00th=[19268], 10.00th=[19268], 20.00th=[19530], 00:17:32.199 | 30.00th=[19530], 40.00th=[19792], 50.00th=[19792], 60.00th=[20055], 00:17:32.199 | 70.00th=[20055], 80.00th=[20317], 90.00th=[20317], 95.00th=[20579], 00:17:32.199 | 99.00th=[22938], 99.50th=[23200], 99.90th=[28967], 99.95th=[34341], 00:17:32.199 | 99.99th=[38011] 00:17:32.199 write: IOPS=11.8k, BW=46.1MiB/s (48.3MB/s)(256MiB/5557msec); 0 zone resets 00:17:32.199 slat (usec): min=5, max=560, avg= 9.50, stdev= 7.01 00:17:32.199 clat (usec): min=675, max=60356, avg=10791.53, stdev=13680.92 00:17:32.199 lat (usec): min=683, max=60364, avg=10801.03, stdev=13681.00 00:17:32.199 clat percentiles (usec): 00:17:32.199 | 1.00th=[ 947], 5.00th=[ 1156], 10.00th=[ 1287], 20.00th=[ 1483], 00:17:32.199 | 30.00th=[ 1696], 40.00th=[ 2180], 50.00th=[ 7046], 60.00th=[ 8029], 00:17:32.199 | 70.00th=[ 9241], 80.00th=[10814], 90.00th=[39584], 95.00th=[42206], 00:17:32.199 | 99.00th=[47973], 99.50th=[49546], 99.90th=[53216], 99.95th=[53740], 00:17:32.199 | 99.99th=[58459] 00:17:32.199 bw ( KiB/s): min= 3488, max=65528, per=92.62%, avg=43690.67, stdev=15821.13, samples=12 00:17:32.199 iops : min= 872, max=16382, avg=10922.67, stdev=3955.28, samples=12 00:17:32.199 lat (usec) : 750=0.02%, 1000=0.81% 00:17:32.199 lat (msec) : 2=18.33%, 4=1.79%, 10=16.53%, 20=37.35%, 50=24.95% 00:17:32.199 lat (msec) : 100=0.21% 00:17:32.199 cpu : usr=97.96%, sys=0.71%, ctx=79, majf=0, 
minf=5577 00:17:32.199 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:17:32.199 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:32.199 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:32.199 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:32.199 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:32.199 00:17:32.199 Run status group 0 (all jobs): 00:17:32.199 READ: bw=25.1MiB/s (26.3MB/s), 25.1MiB/s-25.1MiB/s (26.3MB/s-26.3MB/s), io=255MiB (267MB), run=10136-10136msec 00:17:32.199 WRITE: bw=46.1MiB/s (48.3MB/s), 46.1MiB/s-46.1MiB/s (48.3MB/s-48.3MB/s), io=256MiB (268MB), run=5557-5557msec 00:17:32.199 ----------------------------------------------------- 00:17:32.199 Suppressions used: 00:17:32.199 count bytes template 00:17:32.199 1 5 /usr/src/fio/parse.c 00:17:32.199 2 192 /usr/src/fio/iolog.c 00:17:32.199 1 8 libtcmalloc_minimal.so 00:17:32.199 1 904 libcrypto.so 00:17:32.199 ----------------------------------------------------- 00:17:32.199 00:17:32.199 18:24:16 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:17:32.199 18:24:16 ftl.ftl_fio_basic -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:32.199 18:24:16 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:32.199 18:24:16 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:32.199 18:24:16 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:17:32.199 18:24:16 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:17:32.199 Remove shared memory files 00:17:32.199 18:24:16 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:17:32.200 18:24:16 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:17:32.200 18:24:16 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid73980 /dev/shm/spdk_tgt_trace.pid88284 00:17:32.200 18:24:16 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:17:32.200 18:24:16 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:17:32.200 00:17:32.200 real 1m6.074s 00:17:32.200 user 2m32.613s 00:17:32.200 sys 0m3.486s 00:17:32.200 18:24:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:32.200 18:24:16 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:32.200 ************************************ 00:17:32.200 END TEST ftl_fio_basic 00:17:32.200 ************************************ 00:17:32.200 18:24:17 ftl -- common/autotest_common.sh@1142 -- # return 0 00:17:32.200 18:24:17 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:17:32.200 18:24:17 ftl -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:17:32.200 18:24:17 ftl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:32.200 18:24:17 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:32.200 ************************************ 00:17:32.200 START TEST ftl_bdevperf 00:17:32.200 ************************************ 00:17:32.200 18:24:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:17:32.200 * Looking for test storage... 
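The ftl_bdevperf run starting here assembles its FTL device across the two PCIe controllers passed on the command line (0000:00:11.0 for data, 0000:00:10.0 for the cache). The rpc.py trace that follows reduces to a short command sequence, sketched below; the UUIDs and sizes are the ones this particular run logs (a 103424 MiB thin lvol on the base device, a 5171 MiB split of the cache device, a 20 MiB L2P DRAM limit), and the $rpc shorthand is ours:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Base (data) device and a thin-provisioned lvol on top of it.
  $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
  $rpc bdev_lvol_create_lvstore nvme0n1 lvs
  $rpc bdev_lvol_create nvme0n1p0 103424 -t -u a4370aaa-e654-4237-9815-69973153c391
  # Cache device, carved into a single 5171 MiB write-buffer partition.
  $rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
  $rpc bdev_split_create nvc0n1 -s 5171 1
  # Tie both together as ftl0; first-time startup scrubs the NV cache,
  # hence the generous 240 s RPC timeout seen in the trace.
  $rpc -t 240 bdev_ftl_create -b ftl0 -d 98a18518-4c3e-4f75-a01c-d057a124cd07 -c nvc0n1p0 --l2p_dram_limit 20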
00:17:32.200 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:32.200 18:24:17 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:32.200 18:24:17 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:17:32.200 18:24:17 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:32.200 18:24:17 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:32.200 18:24:17 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:17:32.200 18:24:17 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:32.200 18:24:17 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:32.200 18:24:17 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:32.200 18:24:17 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:32.200 18:24:17 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:32.200 18:24:17 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:32.200 18:24:17 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:32.200 18:24:17 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:32.200 18:24:17 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:32.200 18:24:17 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:32.200 18:24:17 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:32.200 18:24:17 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:32.200 18:24:17 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:32.200 18:24:17 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:32.200 18:24:17 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:32.200 18:24:17 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:32.200 18:24:17 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:32.200 18:24:17 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:32.200 18:24:17 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:32.200 18:24:17 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:32.200 18:24:17 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:32.200 18:24:17 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:32.200 18:24:17 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:32.200 18:24:17 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:32.200 18:24:17 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:17:32.200 18:24:17 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:17:32.200 18:24:17 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:17:32.200 18:24:17 ftl.ftl_bdevperf 
-- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:32.200 18:24:17 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:17:32.200 18:24:17 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # timing_enter '/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0' 00:17:32.200 18:24:17 ftl.ftl_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:32.200 18:24:17 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:32.200 18:24:17 ftl.ftl_bdevperf -- ftl/bdevperf.sh@19 -- # bdevperf_pid=90177 00:17:32.200 18:24:17 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:17:32.200 18:24:17 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # waitforlisten 90177 00:17:32.200 18:24:17 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:17:32.200 18:24:17 ftl.ftl_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 90177 ']' 00:17:32.200 18:24:17 ftl.ftl_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:32.200 18:24:17 ftl.ftl_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:32.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:32.200 18:24:17 ftl.ftl_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:32.200 18:24:17 ftl.ftl_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:32.200 18:24:17 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:32.200 [2024-07-11 18:24:17.237321] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:17:32.200 [2024-07-11 18:24:17.237530] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90177 ] 00:17:32.200 [2024-07-11 18:24:17.381008] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:32.200 [2024-07-11 18:24:17.416892] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:32.200 18:24:18 ftl.ftl_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:32.200 18:24:18 ftl.ftl_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:17:32.200 18:24:18 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:17:32.200 18:24:18 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:17:32.200 18:24:18 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:17:32.200 18:24:18 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:17:32.200 18:24:18 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:17:32.200 18:24:18 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:17:32.200 18:24:18 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:17:32.200 18:24:18 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:17:32.200 18:24:18 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:17:32.200 18:24:18 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:17:32.200 18:24:18 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:32.200 18:24:18 
ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:17:32.200 18:24:18 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:17:32.200 18:24:18 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:17:32.459 18:24:18 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:32.459 { 00:17:32.459 "name": "nvme0n1", 00:17:32.459 "aliases": [ 00:17:32.459 "d8fb72c2-9db4-42a6-a6ca-7b90a83b4e5f" 00:17:32.459 ], 00:17:32.459 "product_name": "NVMe disk", 00:17:32.459 "block_size": 4096, 00:17:32.459 "num_blocks": 1310720, 00:17:32.459 "uuid": "d8fb72c2-9db4-42a6-a6ca-7b90a83b4e5f", 00:17:32.459 "assigned_rate_limits": { 00:17:32.459 "rw_ios_per_sec": 0, 00:17:32.459 "rw_mbytes_per_sec": 0, 00:17:32.459 "r_mbytes_per_sec": 0, 00:17:32.459 "w_mbytes_per_sec": 0 00:17:32.459 }, 00:17:32.459 "claimed": true, 00:17:32.459 "claim_type": "read_many_write_one", 00:17:32.459 "zoned": false, 00:17:32.459 "supported_io_types": { 00:17:32.459 "read": true, 00:17:32.459 "write": true, 00:17:32.459 "unmap": true, 00:17:32.459 "flush": true, 00:17:32.459 "reset": true, 00:17:32.459 "nvme_admin": true, 00:17:32.459 "nvme_io": true, 00:17:32.459 "nvme_io_md": false, 00:17:32.459 "write_zeroes": true, 00:17:32.459 "zcopy": false, 00:17:32.459 "get_zone_info": false, 00:17:32.459 "zone_management": false, 00:17:32.459 "zone_append": false, 00:17:32.459 "compare": true, 00:17:32.459 "compare_and_write": false, 00:17:32.459 "abort": true, 00:17:32.459 "seek_hole": false, 00:17:32.459 "seek_data": false, 00:17:32.459 "copy": true, 00:17:32.459 "nvme_iov_md": false 00:17:32.459 }, 00:17:32.459 "driver_specific": { 00:17:32.459 "nvme": [ 00:17:32.459 { 00:17:32.459 "pci_address": "0000:00:11.0", 00:17:32.459 "trid": { 00:17:32.459 "trtype": "PCIe", 00:17:32.459 "traddr": "0000:00:11.0" 00:17:32.459 }, 00:17:32.459 "ctrlr_data": { 00:17:32.459 "cntlid": 0, 00:17:32.459 "vendor_id": "0x1b36", 00:17:32.459 "model_number": "QEMU NVMe Ctrl", 00:17:32.459 "serial_number": "12341", 00:17:32.459 "firmware_revision": "8.0.0", 00:17:32.459 "subnqn": "nqn.2019-08.org.qemu:12341", 00:17:32.459 "oacs": { 00:17:32.459 "security": 0, 00:17:32.459 "format": 1, 00:17:32.459 "firmware": 0, 00:17:32.459 "ns_manage": 1 00:17:32.459 }, 00:17:32.459 "multi_ctrlr": false, 00:17:32.459 "ana_reporting": false 00:17:32.459 }, 00:17:32.459 "vs": { 00:17:32.459 "nvme_version": "1.4" 00:17:32.459 }, 00:17:32.459 "ns_data": { 00:17:32.459 "id": 1, 00:17:32.459 "can_share": false 00:17:32.459 } 00:17:32.459 } 00:17:32.459 ], 00:17:32.459 "mp_policy": "active_passive" 00:17:32.459 } 00:17:32.459 } 00:17:32.459 ]' 00:17:32.459 18:24:18 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:32.459 18:24:18 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:17:32.459 18:24:18 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:32.459 18:24:18 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=1310720 00:17:32.459 18:24:18 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:17:32.459 18:24:18 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 5120 00:17:32.459 18:24:18 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:17:32.459 18:24:18 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:17:32.459 18:24:18 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:17:32.459 18:24:18 ftl.ftl_bdevperf 
-- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:32.459 18:24:18 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:17:32.718 18:24:19 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=ea45e81d-7a36-4794-87bb-74268bf41b3c 00:17:32.718 18:24:19 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:17:32.718 18:24:19 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ea45e81d-7a36-4794-87bb-74268bf41b3c 00:17:32.977 18:24:19 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:17:33.236 18:24:19 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=a4370aaa-e654-4237-9815-69973153c391 00:17:33.236 18:24:19 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u a4370aaa-e654-4237-9815-69973153c391 00:17:33.494 18:24:19 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # split_bdev=98a18518-4c3e-4f75-a01c-d057a124cd07 00:17:33.494 18:24:19 ftl.ftl_bdevperf -- ftl/bdevperf.sh@24 -- # create_nv_cache_bdev nvc0 0000:00:10.0 98a18518-4c3e-4f75-a01c-d057a124cd07 00:17:33.494 18:24:19 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:17:33.494 18:24:19 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:17:33.494 18:24:19 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=98a18518-4c3e-4f75-a01c-d057a124cd07 00:17:33.494 18:24:19 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:17:33.494 18:24:19 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 98a18518-4c3e-4f75-a01c-d057a124cd07 00:17:33.494 18:24:19 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=98a18518-4c3e-4f75-a01c-d057a124cd07 00:17:33.494 18:24:19 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:33.494 18:24:19 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:17:33.494 18:24:19 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:17:33.494 18:24:19 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 98a18518-4c3e-4f75-a01c-d057a124cd07 00:17:34.061 18:24:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:34.061 { 00:17:34.061 "name": "98a18518-4c3e-4f75-a01c-d057a124cd07", 00:17:34.061 "aliases": [ 00:17:34.061 "lvs/nvme0n1p0" 00:17:34.061 ], 00:17:34.061 "product_name": "Logical Volume", 00:17:34.061 "block_size": 4096, 00:17:34.061 "num_blocks": 26476544, 00:17:34.061 "uuid": "98a18518-4c3e-4f75-a01c-d057a124cd07", 00:17:34.061 "assigned_rate_limits": { 00:17:34.061 "rw_ios_per_sec": 0, 00:17:34.061 "rw_mbytes_per_sec": 0, 00:17:34.061 "r_mbytes_per_sec": 0, 00:17:34.061 "w_mbytes_per_sec": 0 00:17:34.061 }, 00:17:34.061 "claimed": false, 00:17:34.061 "zoned": false, 00:17:34.061 "supported_io_types": { 00:17:34.061 "read": true, 00:17:34.061 "write": true, 00:17:34.061 "unmap": true, 00:17:34.061 "flush": false, 00:17:34.061 "reset": true, 00:17:34.061 "nvme_admin": false, 00:17:34.061 "nvme_io": false, 00:17:34.061 "nvme_io_md": false, 00:17:34.061 "write_zeroes": true, 00:17:34.061 "zcopy": false, 00:17:34.061 "get_zone_info": false, 00:17:34.061 "zone_management": false, 00:17:34.061 "zone_append": false, 00:17:34.061 "compare": false, 00:17:34.061 "compare_and_write": false, 00:17:34.061 "abort": false, 00:17:34.061 "seek_hole": true, 
00:17:34.061 "seek_data": true, 00:17:34.061 "copy": false, 00:17:34.061 "nvme_iov_md": false 00:17:34.061 }, 00:17:34.061 "driver_specific": { 00:17:34.061 "lvol": { 00:17:34.061 "lvol_store_uuid": "a4370aaa-e654-4237-9815-69973153c391", 00:17:34.061 "base_bdev": "nvme0n1", 00:17:34.061 "thin_provision": true, 00:17:34.061 "num_allocated_clusters": 0, 00:17:34.061 "snapshot": false, 00:17:34.061 "clone": false, 00:17:34.061 "esnap_clone": false 00:17:34.061 } 00:17:34.061 } 00:17:34.061 } 00:17:34.061 ]' 00:17:34.061 18:24:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:34.061 18:24:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:17:34.061 18:24:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:34.061 18:24:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:17:34.061 18:24:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:17:34.061 18:24:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:17:34.061 18:24:20 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:17:34.061 18:24:20 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:17:34.061 18:24:20 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:17:34.320 18:24:20 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:17:34.320 18:24:20 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:17:34.320 18:24:20 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 98a18518-4c3e-4f75-a01c-d057a124cd07 00:17:34.320 18:24:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=98a18518-4c3e-4f75-a01c-d057a124cd07 00:17:34.320 18:24:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:34.320 18:24:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:17:34.320 18:24:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:17:34.320 18:24:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 98a18518-4c3e-4f75-a01c-d057a124cd07 00:17:34.608 18:24:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:34.608 { 00:17:34.608 "name": "98a18518-4c3e-4f75-a01c-d057a124cd07", 00:17:34.608 "aliases": [ 00:17:34.608 "lvs/nvme0n1p0" 00:17:34.608 ], 00:17:34.608 "product_name": "Logical Volume", 00:17:34.608 "block_size": 4096, 00:17:34.608 "num_blocks": 26476544, 00:17:34.608 "uuid": "98a18518-4c3e-4f75-a01c-d057a124cd07", 00:17:34.608 "assigned_rate_limits": { 00:17:34.608 "rw_ios_per_sec": 0, 00:17:34.608 "rw_mbytes_per_sec": 0, 00:17:34.608 "r_mbytes_per_sec": 0, 00:17:34.608 "w_mbytes_per_sec": 0 00:17:34.608 }, 00:17:34.608 "claimed": false, 00:17:34.608 "zoned": false, 00:17:34.608 "supported_io_types": { 00:17:34.608 "read": true, 00:17:34.608 "write": true, 00:17:34.608 "unmap": true, 00:17:34.608 "flush": false, 00:17:34.608 "reset": true, 00:17:34.608 "nvme_admin": false, 00:17:34.608 "nvme_io": false, 00:17:34.608 "nvme_io_md": false, 00:17:34.608 "write_zeroes": true, 00:17:34.608 "zcopy": false, 00:17:34.608 "get_zone_info": false, 00:17:34.608 "zone_management": false, 00:17:34.608 "zone_append": false, 00:17:34.608 "compare": false, 00:17:34.608 "compare_and_write": false, 00:17:34.608 "abort": false, 00:17:34.608 "seek_hole": true, 00:17:34.608 "seek_data": true, 00:17:34.608 
"copy": false, 00:17:34.608 "nvme_iov_md": false 00:17:34.608 }, 00:17:34.608 "driver_specific": { 00:17:34.608 "lvol": { 00:17:34.608 "lvol_store_uuid": "a4370aaa-e654-4237-9815-69973153c391", 00:17:34.608 "base_bdev": "nvme0n1", 00:17:34.608 "thin_provision": true, 00:17:34.608 "num_allocated_clusters": 0, 00:17:34.608 "snapshot": false, 00:17:34.608 "clone": false, 00:17:34.608 "esnap_clone": false 00:17:34.608 } 00:17:34.608 } 00:17:34.608 } 00:17:34.608 ]' 00:17:34.608 18:24:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:34.608 18:24:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:17:34.608 18:24:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:34.608 18:24:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:17:34.608 18:24:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:17:34.608 18:24:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:17:34.608 18:24:20 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:17:34.608 18:24:20 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:17:34.871 18:24:21 ftl.ftl_bdevperf -- ftl/bdevperf.sh@24 -- # nv_cache=nvc0n1p0 00:17:34.871 18:24:21 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # get_bdev_size 98a18518-4c3e-4f75-a01c-d057a124cd07 00:17:34.871 18:24:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=98a18518-4c3e-4f75-a01c-d057a124cd07 00:17:34.871 18:24:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:34.871 18:24:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:17:34.871 18:24:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:17:34.871 18:24:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 98a18518-4c3e-4f75-a01c-d057a124cd07 00:17:35.130 18:24:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:35.130 { 00:17:35.130 "name": "98a18518-4c3e-4f75-a01c-d057a124cd07", 00:17:35.130 "aliases": [ 00:17:35.130 "lvs/nvme0n1p0" 00:17:35.130 ], 00:17:35.130 "product_name": "Logical Volume", 00:17:35.130 "block_size": 4096, 00:17:35.130 "num_blocks": 26476544, 00:17:35.130 "uuid": "98a18518-4c3e-4f75-a01c-d057a124cd07", 00:17:35.130 "assigned_rate_limits": { 00:17:35.130 "rw_ios_per_sec": 0, 00:17:35.130 "rw_mbytes_per_sec": 0, 00:17:35.130 "r_mbytes_per_sec": 0, 00:17:35.130 "w_mbytes_per_sec": 0 00:17:35.130 }, 00:17:35.130 "claimed": false, 00:17:35.130 "zoned": false, 00:17:35.130 "supported_io_types": { 00:17:35.130 "read": true, 00:17:35.130 "write": true, 00:17:35.130 "unmap": true, 00:17:35.130 "flush": false, 00:17:35.130 "reset": true, 00:17:35.130 "nvme_admin": false, 00:17:35.130 "nvme_io": false, 00:17:35.130 "nvme_io_md": false, 00:17:35.130 "write_zeroes": true, 00:17:35.130 "zcopy": false, 00:17:35.130 "get_zone_info": false, 00:17:35.130 "zone_management": false, 00:17:35.130 "zone_append": false, 00:17:35.130 "compare": false, 00:17:35.130 "compare_and_write": false, 00:17:35.130 "abort": false, 00:17:35.130 "seek_hole": true, 00:17:35.130 "seek_data": true, 00:17:35.130 "copy": false, 00:17:35.130 "nvme_iov_md": false 00:17:35.130 }, 00:17:35.130 "driver_specific": { 00:17:35.130 "lvol": { 00:17:35.130 "lvol_store_uuid": "a4370aaa-e654-4237-9815-69973153c391", 00:17:35.130 "base_bdev": 
"nvme0n1", 00:17:35.130 "thin_provision": true, 00:17:35.130 "num_allocated_clusters": 0, 00:17:35.130 "snapshot": false, 00:17:35.130 "clone": false, 00:17:35.130 "esnap_clone": false 00:17:35.130 } 00:17:35.130 } 00:17:35.130 } 00:17:35.130 ]' 00:17:35.130 18:24:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:35.130 18:24:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:17:35.130 18:24:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:35.130 18:24:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:17:35.130 18:24:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:17:35.130 18:24:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:17:35.130 18:24:21 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # l2p_dram_size_mb=20 00:17:35.130 18:24:21 ftl.ftl_bdevperf -- ftl/bdevperf.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 98a18518-4c3e-4f75-a01c-d057a124cd07 -c nvc0n1p0 --l2p_dram_limit 20 00:17:35.390 [2024-07-11 18:24:21.754218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.390 [2024-07-11 18:24:21.754298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:35.390 [2024-07-11 18:24:21.754343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:35.390 [2024-07-11 18:24:21.754367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.390 [2024-07-11 18:24:21.754444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.390 [2024-07-11 18:24:21.754473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:35.390 [2024-07-11 18:24:21.754488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:17:35.390 [2024-07-11 18:24:21.754504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.390 [2024-07-11 18:24:21.754573] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:35.390 [2024-07-11 18:24:21.754927] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:35.390 [2024-07-11 18:24:21.754968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.390 [2024-07-11 18:24:21.754994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:35.390 [2024-07-11 18:24:21.755008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.416 ms 00:17:35.390 [2024-07-11 18:24:21.755021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.390 [2024-07-11 18:24:21.755225] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID bacd650b-a7bc-459b-ad8d-b9d7deea91a4 00:17:35.390 [2024-07-11 18:24:21.756253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.390 [2024-07-11 18:24:21.756297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:17:35.390 [2024-07-11 18:24:21.756319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:17:35.390 [2024-07-11 18:24:21.756332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.390 [2024-07-11 18:24:21.760970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.390 [2024-07-11 18:24:21.761031] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:35.390 [2024-07-11 18:24:21.761065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.575 ms 00:17:35.390 [2024-07-11 18:24:21.761077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.390 [2024-07-11 18:24:21.761194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.390 [2024-07-11 18:24:21.761215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:35.390 [2024-07-11 18:24:21.761230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:17:35.390 [2024-07-11 18:24:21.761252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.390 [2024-07-11 18:24:21.761319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.390 [2024-07-11 18:24:21.761338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:35.390 [2024-07-11 18:24:21.761367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:17:35.390 [2024-07-11 18:24:21.761381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.391 [2024-07-11 18:24:21.761420] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:35.391 [2024-07-11 18:24:21.762988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.391 [2024-07-11 18:24:21.763062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:35.391 [2024-07-11 18:24:21.763100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.585 ms 00:17:35.391 [2024-07-11 18:24:21.763118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.391 [2024-07-11 18:24:21.763172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.391 [2024-07-11 18:24:21.763191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:35.391 [2024-07-11 18:24:21.763204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:17:35.391 [2024-07-11 18:24:21.763220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.391 [2024-07-11 18:24:21.763243] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:17:35.391 [2024-07-11 18:24:21.763419] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:35.391 [2024-07-11 18:24:21.763445] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:35.391 [2024-07-11 18:24:21.763465] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:17:35.391 [2024-07-11 18:24:21.763482] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:35.391 [2024-07-11 18:24:21.763498] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:35.391 [2024-07-11 18:24:21.763511] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:17:35.391 [2024-07-11 18:24:21.763527] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:35.391 [2024-07-11 18:24:21.763538] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:35.391 [2024-07-11 18:24:21.763560] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache chunk count 5 00:17:35.391 [2024-07-11 18:24:21.763573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.391 [2024-07-11 18:24:21.763594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:35.391 [2024-07-11 18:24:21.763606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.332 ms 00:17:35.391 [2024-07-11 18:24:21.763619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.391 [2024-07-11 18:24:21.763714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.391 [2024-07-11 18:24:21.763735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:35.391 [2024-07-11 18:24:21.763748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:17:35.391 [2024-07-11 18:24:21.763763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.391 [2024-07-11 18:24:21.763866] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:35.391 [2024-07-11 18:24:21.763897] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:35.391 [2024-07-11 18:24:21.763912] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:35.391 [2024-07-11 18:24:21.763926] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:35.391 [2024-07-11 18:24:21.763940] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:35.391 [2024-07-11 18:24:21.763955] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:35.391 [2024-07-11 18:24:21.763967] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:17:35.391 [2024-07-11 18:24:21.763980] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:35.391 [2024-07-11 18:24:21.763991] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:17:35.391 [2024-07-11 18:24:21.764004] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:35.391 [2024-07-11 18:24:21.764014] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:35.391 [2024-07-11 18:24:21.764027] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:17:35.391 [2024-07-11 18:24:21.764037] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:35.391 [2024-07-11 18:24:21.764052] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:35.391 [2024-07-11 18:24:21.764063] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:17:35.391 [2024-07-11 18:24:21.764076] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:35.391 [2024-07-11 18:24:21.764106] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:35.391 [2024-07-11 18:24:21.764120] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:17:35.391 [2024-07-11 18:24:21.764132] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:35.391 [2024-07-11 18:24:21.764145] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:35.391 [2024-07-11 18:24:21.764156] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:17:35.391 [2024-07-11 18:24:21.764169] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:35.391 [2024-07-11 18:24:21.764182] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:35.391 [2024-07-11 18:24:21.764195] ftl_layout.c: 
119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:17:35.391 [2024-07-11 18:24:21.764206] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:35.391 [2024-07-11 18:24:21.764220] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:35.391 [2024-07-11 18:24:21.764231] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:17:35.391 [2024-07-11 18:24:21.764243] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:35.391 [2024-07-11 18:24:21.764254] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:35.391 [2024-07-11 18:24:21.764269] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:17:35.391 [2024-07-11 18:24:21.764280] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:35.391 [2024-07-11 18:24:21.764292] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:35.391 [2024-07-11 18:24:21.764303] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:17:35.391 [2024-07-11 18:24:21.764316] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:35.391 [2024-07-11 18:24:21.764327] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:35.391 [2024-07-11 18:24:21.764339] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:17:35.391 [2024-07-11 18:24:21.764350] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:35.391 [2024-07-11 18:24:21.764363] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:35.391 [2024-07-11 18:24:21.764374] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:17:35.391 [2024-07-11 18:24:21.764387] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:35.391 [2024-07-11 18:24:21.764397] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:35.391 [2024-07-11 18:24:21.764410] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:17:35.391 [2024-07-11 18:24:21.764421] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:35.391 [2024-07-11 18:24:21.764433] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:35.391 [2024-07-11 18:24:21.764444] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:35.391 [2024-07-11 18:24:21.764459] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:35.391 [2024-07-11 18:24:21.764471] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:35.391 [2024-07-11 18:24:21.764495] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:35.391 [2024-07-11 18:24:21.764506] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:35.392 [2024-07-11 18:24:21.764521] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:35.392 [2024-07-11 18:24:21.764532] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:35.392 [2024-07-11 18:24:21.764545] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:35.392 [2024-07-11 18:24:21.764556] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:35.392 [2024-07-11 18:24:21.764643] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:35.392 [2024-07-11 18:24:21.764663] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:35.392 [2024-07-11 18:24:21.764679] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:17:35.392 [2024-07-11 18:24:21.764691] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:17:35.392 [2024-07-11 18:24:21.764704] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:17:35.392 [2024-07-11 18:24:21.764716] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:17:35.392 [2024-07-11 18:24:21.764729] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:17:35.392 [2024-07-11 18:24:21.764741] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:17:35.392 [2024-07-11 18:24:21.764757] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:17:35.392 [2024-07-11 18:24:21.764769] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:17:35.392 [2024-07-11 18:24:21.764782] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:17:35.392 [2024-07-11 18:24:21.764794] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:17:35.392 [2024-07-11 18:24:21.764808] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:17:35.392 [2024-07-11 18:24:21.764819] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:17:35.392 [2024-07-11 18:24:21.764833] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:17:35.392 [2024-07-11 18:24:21.764845] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:17:35.392 [2024-07-11 18:24:21.764871] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:35.392 [2024-07-11 18:24:21.764884] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:35.392 [2024-07-11 18:24:21.764900] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:35.392 [2024-07-11 18:24:21.764912] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:35.392 [2024-07-11 18:24:21.764926] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:35.392 [2024-07-11 18:24:21.764938] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 
blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:35.392 [2024-07-11 18:24:21.764953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.392 [2024-07-11 18:24:21.764965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:35.392 [2024-07-11 18:24:21.764981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.151 ms 00:17:35.392 [2024-07-11 18:24:21.764993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.392 [2024-07-11 18:24:21.765043] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:17:35.392 [2024-07-11 18:24:21.765061] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:17:37.924 [2024-07-11 18:24:23.918823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.924 [2024-07-11 18:24:23.918934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:17:37.924 [2024-07-11 18:24:23.918974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2153.779 ms 00:17:37.924 [2024-07-11 18:24:23.918986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.924 [2024-07-11 18:24:23.934922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.924 [2024-07-11 18:24:23.934999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:37.924 [2024-07-11 18:24:23.935046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.845 ms 00:17:37.924 [2024-07-11 18:24:23.935063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.924 [2024-07-11 18:24:23.935253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.924 [2024-07-11 18:24:23.935276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:37.924 [2024-07-11 18:24:23.935295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:17:37.924 [2024-07-11 18:24:23.935319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.924 [2024-07-11 18:24:23.944919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.924 [2024-07-11 18:24:23.944980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:37.924 [2024-07-11 18:24:23.945015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.501 ms 00:17:37.924 [2024-07-11 18:24:23.945030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.924 [2024-07-11 18:24:23.945115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.924 [2024-07-11 18:24:23.945140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:37.924 [2024-07-11 18:24:23.945159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:37.924 [2024-07-11 18:24:23.945185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.924 [2024-07-11 18:24:23.945612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.924 [2024-07-11 18:24:23.945650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:37.924 [2024-07-11 18:24:23.945672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.362 ms 00:17:37.924 [2024-07-11 18:24:23.945687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.924 [2024-07-11 18:24:23.945925] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.924 [2024-07-11 18:24:23.945966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:37.924 [2024-07-11 18:24:23.945986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.162 ms 00:17:37.924 [2024-07-11 18:24:23.946001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.924 [2024-07-11 18:24:23.950997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.924 [2024-07-11 18:24:23.951050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:37.924 [2024-07-11 18:24:23.951123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.963 ms 00:17:37.924 [2024-07-11 18:24:23.951154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.924 [2024-07-11 18:24:23.959574] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:17:37.924 [2024-07-11 18:24:23.964576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.924 [2024-07-11 18:24:23.964630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:37.924 [2024-07-11 18:24:23.964663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.343 ms 00:17:37.924 [2024-07-11 18:24:23.964686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.924 [2024-07-11 18:24:24.012857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.924 [2024-07-11 18:24:24.012972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:17:37.924 [2024-07-11 18:24:24.012996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.129 ms 00:17:37.924 [2024-07-11 18:24:24.013013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.924 [2024-07-11 18:24:24.013296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.924 [2024-07-11 18:24:24.013322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:37.924 [2024-07-11 18:24:24.013337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.233 ms 00:17:37.924 [2024-07-11 18:24:24.013351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.924 [2024-07-11 18:24:24.017109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.924 [2024-07-11 18:24:24.017188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:17:37.924 [2024-07-11 18:24:24.017222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.713 ms 00:17:37.924 [2024-07-11 18:24:24.017239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.924 [2024-07-11 18:24:24.020464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.924 [2024-07-11 18:24:24.020538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:17:37.924 [2024-07-11 18:24:24.020572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.182 ms 00:17:37.924 [2024-07-11 18:24:24.020584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.924 [2024-07-11 18:24:24.020967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.924 [2024-07-11 18:24:24.021021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:37.924 [2024-07-11 18:24:24.021046] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.344 ms
00:17:37.924 [2024-07-11 18:24:24.021072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:37.925 [2024-07-11 18:24:24.054807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:37.925 [2024-07-11 18:24:24.054936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region
00:17:37.925 [2024-07-11 18:24:24.054973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.674 ms
00:17:37.925 [2024-07-11 18:24:24.054989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:37.925 [2024-07-11 18:24:24.059546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:37.925 [2024-07-11 18:24:24.059610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map
00:17:37.925 [2024-07-11 18:24:24.059644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.508 ms
00:17:37.925 [2024-07-11 18:24:24.059657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:37.925 [2024-07-11 18:24:24.063420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:37.925 [2024-07-11 18:24:24.063498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log
00:17:37.925 [2024-07-11 18:24:24.063530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.718 ms
00:17:37.925 [2024-07-11 18:24:24.063543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:37.925 [2024-07-11 18:24:24.067697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:37.925 [2024-07-11 18:24:24.067761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:17:37.925 [2024-07-11 18:24:24.067794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.097 ms
00:17:37.925 [2024-07-11 18:24:24.067811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:37.925 [2024-07-11 18:24:24.067860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:37.925 [2024-07-11 18:24:24.067882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller
00:17:37.925 [2024-07-11 18:24:24.067905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms
00:17:37.925 [2024-07-11 18:24:24.067920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:37.925 [2024-07-11 18:24:24.068011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:37.925 [2024-07-11 18:24:24.068031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:17:37.925 [2024-07-11 18:24:24.068045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms
00:17:37.925 [2024-07-11 18:24:24.068061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:37.925 [2024-07-11 18:24:24.069197] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2314.508 ms, result 0
00:17:37.925 {
00:17:37.925 "name": "ftl0",
00:17:37.925 "uuid": "bacd650b-a7bc-459b-ad8d-b9d7deea91a4"
00:17:37.925 }
00:17:37.925 18:24:24 ftl.ftl_bdevperf -- ftl/bdevperf.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0
00:17:37.925 18:24:24 ftl.ftl_bdevperf -- ftl/bdevperf.sh@29 -- # grep -qw ftl0
00:17:37.925 18:24:24 ftl.ftl_bdevperf -- ftl/bdevperf.sh@29 -- # jq -r .name
00:17:38.182 18:24:24 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632
[2024-07-11 18:24:24.474326] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
I/O size of 69632 is greater than zero copy threshold (65536). Zero copy mechanism will not be used.
Running I/O for 4 seconds...
00:17:42.366
00:17:42.366 Latency(us)
00:17:42.366 Device Information : runtime(s)    IOPS      MiB/s    Fail/s   TO/s     Average    min        max
00:17:42.366 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632)
00:17:42.366 ftl0 :               4.00          1674.96   111.23   0.00     0.00     625.67     249.48     1050.07
00:17:42.366 ===================================================================================================================
00:17:42.366 Total :              4.00          1674.96   111.23   0.00     0.00     625.67     249.48     1050.07
00:17:42.366 [2024-07-11 18:24:28.481644] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
00:17:42.366 0
00:17:42.366 18:24:28 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096
[2024-07-11 18:24:28.614456] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
Running I/O for 4 seconds...
00:17:46.549
00:17:46.549 Latency(us)
00:17:46.549 Device Information : runtime(s)    IOPS      MiB/s    Fail/s   TO/s     Average    min        max
00:17:46.549 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096)
00:17:46.549 ftl0 :               4.02          7466.22   29.16    0.00     0.00     17093.96   323.96     38844.97
00:17:46.549 ===================================================================================================================
00:17:46.549 Total :              4.02          7466.22   29.16    0.00     0.00     17093.96   0.00       38844.97
00:17:46.549 [2024-07-11 18:24:32.642891] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
00:17:46.549 0
00:17:46.549 18:24:32 ftl.ftl_bdevperf -- ftl/bdevperf.sh@33 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096
[2024-07-11 18:24:32.777498] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
Running I/O for 4 seconds...
00:17:50.735
00:17:50.735 Latency(us)
00:17:50.735 Device Information : runtime(s)    IOPS      MiB/s    Fail/s   TO/s     Average    min        max
00:17:50.735 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:17:50.735 Verification LBA range: start 0x0 length 0x1400000
00:17:50.735 ftl0 :               4.01          5490.12   21.45    0.00     0.00     23226.93   340.71     33840.41
00:17:50.735 ===================================================================================================================
00:17:50.735 Total :              4.01          5490.12   21.45    0.00     0.00     23226.93   0.00       33840.41
00:17:50.735 [2024-07-11 18:24:36.797929] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
00:17:50.735 0
00:17:50.735 18:24:36 ftl.ftl_bdevperf -- ftl/bdevperf.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0
00:17:50.735 [2024-07-11 18:24:37.046999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:50.735 [2024-07-11 18:24:37.047084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:17:50.735 [2024-07-11 18:24:37.047118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:17:50.735 [2024-07-11 18:24:37.047135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:50.735 [2024-07-11 18:24:37.047165] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:17:50.735 [2024-07-11 18:24:37.047574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:50.735 [2024-07-11 18:24:37.047599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:17:50.735 [2024-07-11 18:24:37.047615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.386 ms
00:17:50.735 [2024-07-11 18:24:37.047626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:50.735 [2024-07-11 18:24:37.049189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:50.735 [2024-07-11 18:24:37.049258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:17:50.735 [2024-07-11 18:24:37.049291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.534 ms
00:17:50.735 [2024-07-11 18:24:37.049303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:50.995 [2024-07-11 18:24:37.223235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:50.995 [2024-07-11 18:24:37.223303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:17:50.995 [2024-07-11 18:24:37.223326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 173.889 ms
00:17:50.995 [2024-07-11 18:24:37.223339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:50.995 [2024-07-11 18:24:37.230036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:50.995 [2024-07-11 18:24:37.230105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:17:50.995 [2024-07-11 18:24:37.230123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.648 ms
00:17:50.995 [2024-07-11 18:24:37.230134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:50.995 [2024-07-11 18:24:37.231719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:50.995 [2024-07-11 18:24:37.231769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:17:50.995 [2024-07-11 18:24:37.231802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*:
[FTL][ftl0] duration: 1.477 ms 00:17:50.995 [2024-07-11 18:24:37.231813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.995 [2024-07-11 18:24:37.236041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.995 [2024-07-11 18:24:37.236110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:50.995 [2024-07-11 18:24:37.236146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.188 ms 00:17:50.995 [2024-07-11 18:24:37.236190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.995 [2024-07-11 18:24:37.236383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.995 [2024-07-11 18:24:37.236403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:50.995 [2024-07-11 18:24:37.236417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.131 ms 00:17:50.995 [2024-07-11 18:24:37.236429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.995 [2024-07-11 18:24:37.238223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.995 [2024-07-11 18:24:37.238272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:17:50.995 [2024-07-11 18:24:37.238316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.763 ms 00:17:50.995 [2024-07-11 18:24:37.238326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.995 [2024-07-11 18:24:37.239922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.995 [2024-07-11 18:24:37.240002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:17:50.995 [2024-07-11 18:24:37.240033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.554 ms 00:17:50.995 [2024-07-11 18:24:37.240043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.995 [2024-07-11 18:24:37.241354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.995 [2024-07-11 18:24:37.241386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:50.996 [2024-07-11 18:24:37.241417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.272 ms 00:17:50.996 [2024-07-11 18:24:37.241428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.996 [2024-07-11 18:24:37.242673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.996 [2024-07-11 18:24:37.242712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:50.996 [2024-07-11 18:24:37.242729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.177 ms 00:17:50.996 [2024-07-11 18:24:37.242741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.996 [2024-07-11 18:24:37.242783] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:50.996 [2024-07-11 18:24:37.242805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:50.996 [2024-07-11 18:24:37.242822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:50.996 [2024-07-11 18:24:37.242834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:50.996 [2024-07-11 18:24:37.242848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:50.996 
[2024-07-11 18:24:37.242859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:50.996 [2024-07-11 18:24:37.242873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:50.996 [2024-07-11 18:24:37.242884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:50.996 [2024-07-11 18:24:37.242915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:50.996 [2024-07-11 18:24:37.242927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:50.996 [2024-07-11 18:24:37.242941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:50.996 [2024-07-11 18:24:37.242952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:50.996 [2024-07-11 18:24:37.242982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:50.996 [2024-07-11 18:24:37.242993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:50.996 [2024-07-11 18:24:37.243007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:50.996 [2024-07-11 18:24:37.243018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:50.996 [2024-07-11 18:24:37.243030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:50.996 [2024-07-11 18:24:37.243042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:50.996 [2024-07-11 18:24:37.243054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:50.996 [2024-07-11 18:24:37.243065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:50.996 [2024-07-11 18:24:37.243078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:50.996 [2024-07-11 18:24:37.243089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:50.996 [2024-07-11 18:24:37.243101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:50.996 [2024-07-11 18:24:37.243126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:50.996 [2024-07-11 18:24:37.243141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:50.996 [2024-07-11 18:24:37.243163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:50.996 [2024-07-11 18:24:37.243177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:50.996 [2024-07-11 18:24:37.243204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:50.996 [2024-07-11 18:24:37.243220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:50.996 [2024-07-11 18:24:37.243232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: 
free 00:17:50.996 [2024-07-11 18:24:37.243246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:50.996 [2024-07-11 18:24:37.243258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:50.996 [2024-07-11 18:24:37.243271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:50.996 [2024-07-11 18:24:37.243283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:50.996 [2024-07-11 18:24:37.243299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:50.996 [2024-07-11 18:24:37.243313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:50.996 [2024-07-11 18:24:37.243327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:50.996 [2024-07-11 18:24:37.243339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:50.996 [2024-07-11 18:24:37.243353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:50.996 [2024-07-11 18:24:37.243365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:50.996 [2024-07-11 18:24:37.243378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:50.996 [2024-07-11 18:24:37.243390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:50.996 [2024-07-11 18:24:37.243403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:50.996 [2024-07-11 18:24:37.243414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:50.996 [2024-07-11 18:24:37.243429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:50.996 [2024-07-11 18:24:37.243441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:50.996 [2024-07-11 18:24:37.243455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:50.996 [2024-07-11 18:24:37.243467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:50.996 [2024-07-11 18:24:37.243480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:50.996 [2024-07-11 18:24:37.243507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:50.996 [2024-07-11 18:24:37.243521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:50.996 [2024-07-11 18:24:37.243532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:50.996 [2024-07-11 18:24:37.243545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:50.996 [2024-07-11 18:24:37.243571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:50.996 [2024-07-11 18:24:37.243584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 
261120 wr_cnt: 0 state: free 00:17:50.996 [2024-07-11 18:24:37.243595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:50.996 [2024-07-11 18:24:37.243608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:50.996 [2024-07-11 18:24:37.243618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:50.996 [2024-07-11 18:24:37.243631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:50.996 [2024-07-11 18:24:37.243642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:50.996 [2024-07-11 18:24:37.243658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:50.996 [2024-07-11 18:24:37.243670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:50.996 [2024-07-11 18:24:37.243682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:50.996 [2024-07-11 18:24:37.243693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:50.996 [2024-07-11 18:24:37.243705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:50.996 [2024-07-11 18:24:37.243716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:50.996 [2024-07-11 18:24:37.243729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:50.996 [2024-07-11 18:24:37.243741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:50.996 [2024-07-11 18:24:37.243754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:50.996 [2024-07-11 18:24:37.243765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:50.996 [2024-07-11 18:24:37.243778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:50.996 [2024-07-11 18:24:37.243789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:50.996 [2024-07-11 18:24:37.243802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:50.996 [2024-07-11 18:24:37.243812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:50.996 [2024-07-11 18:24:37.243825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:50.996 [2024-07-11 18:24:37.243836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:50.996 [2024-07-11 18:24:37.243851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:50.996 [2024-07-11 18:24:37.243862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:50.996 [2024-07-11 18:24:37.243875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:50.996 [2024-07-11 18:24:37.243886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:50.996 [2024-07-11 18:24:37.243898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:50.996 [2024-07-11 18:24:37.243909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:50.996 [2024-07-11 18:24:37.243922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:50.996 [2024-07-11 18:24:37.243933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:50.996 [2024-07-11 18:24:37.243946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:50.996 [2024-07-11 18:24:37.243957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:50.996 [2024-07-11 18:24:37.243969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:50.996 [2024-07-11 18:24:37.243980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:50.996 [2024-07-11 18:24:37.243994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:50.996 [2024-07-11 18:24:37.244005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:50.996 [2024-07-11 18:24:37.244018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:50.997 [2024-07-11 18:24:37.244029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:50.997 [2024-07-11 18:24:37.244044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:50.997 [2024-07-11 18:24:37.244056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:50.997 [2024-07-11 18:24:37.244069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:50.997 [2024-07-11 18:24:37.244080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:50.997 [2024-07-11 18:24:37.244092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:50.997 [2024-07-11 18:24:37.244103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:50.997 [2024-07-11 18:24:37.244117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:50.997 [2024-07-11 18:24:37.244128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:50.997 [2024-07-11 18:24:37.244193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:50.997 [2024-07-11 18:24:37.244216] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:50.997 [2024-07-11 18:24:37.244230] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: bacd650b-a7bc-459b-ad8d-b9d7deea91a4 00:17:50.997 [2024-07-11 18:24:37.244243] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:50.997 [2024-07-11 18:24:37.244255] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 
00:17:50.997 [2024-07-11 18:24:37.244266] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:50.997 [2024-07-11 18:24:37.244279] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:50.997 [2024-07-11 18:24:37.244290] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:50.997 [2024-07-11 18:24:37.244306] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:50.997 [2024-07-11 18:24:37.244318] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:50.997 [2024-07-11 18:24:37.244330] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:50.997 [2024-07-11 18:24:37.244340] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:50.997 [2024-07-11 18:24:37.244354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.997 [2024-07-11 18:24:37.244367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:50.997 [2024-07-11 18:24:37.244397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.575 ms 00:17:50.997 [2024-07-11 18:24:37.244409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.997 [2024-07-11 18:24:37.245755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.997 [2024-07-11 18:24:37.245801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:50.997 [2024-07-11 18:24:37.245817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.320 ms 00:17:50.997 [2024-07-11 18:24:37.245829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.997 [2024-07-11 18:24:37.245925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.997 [2024-07-11 18:24:37.245947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:50.997 [2024-07-11 18:24:37.245962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:17:50.997 [2024-07-11 18:24:37.245973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.997 [2024-07-11 18:24:37.250524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:50.997 [2024-07-11 18:24:37.250606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:50.997 [2024-07-11 18:24:37.250625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:50.997 [2024-07-11 18:24:37.250637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.997 [2024-07-11 18:24:37.250702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:50.997 [2024-07-11 18:24:37.250718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:50.997 [2024-07-11 18:24:37.250733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:50.997 [2024-07-11 18:24:37.250744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.997 [2024-07-11 18:24:37.250830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:50.997 [2024-07-11 18:24:37.250885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:50.997 [2024-07-11 18:24:37.250916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:50.997 [2024-07-11 18:24:37.250927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.997 [2024-07-11 18:24:37.250952] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:50.997 [2024-07-11 18:24:37.250975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:50.997 [2024-07-11 18:24:37.251004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:50.997 [2024-07-11 18:24:37.251015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.997 [2024-07-11 18:24:37.258686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:50.997 [2024-07-11 18:24:37.258757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:50.997 [2024-07-11 18:24:37.258801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:50.997 [2024-07-11 18:24:37.258813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.997 [2024-07-11 18:24:37.264959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:50.997 [2024-07-11 18:24:37.265025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:50.997 [2024-07-11 18:24:37.265059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:50.997 [2024-07-11 18:24:37.265070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.997 [2024-07-11 18:24:37.265230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:50.997 [2024-07-11 18:24:37.265250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:50.997 [2024-07-11 18:24:37.265264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:50.997 [2024-07-11 18:24:37.265290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.997 [2024-07-11 18:24:37.265351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:50.997 [2024-07-11 18:24:37.265371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:50.997 [2024-07-11 18:24:37.265385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:50.997 [2024-07-11 18:24:37.265398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.997 [2024-07-11 18:24:37.265500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:50.997 [2024-07-11 18:24:37.265524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:50.997 [2024-07-11 18:24:37.265539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:50.997 [2024-07-11 18:24:37.265550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.997 [2024-07-11 18:24:37.265600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:50.997 [2024-07-11 18:24:37.265617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:50.997 [2024-07-11 18:24:37.265631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:50.997 [2024-07-11 18:24:37.265644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.997 [2024-07-11 18:24:37.265690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:50.997 [2024-07-11 18:24:37.265705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:50.997 [2024-07-11 18:24:37.265718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:50.997 [2024-07-11 18:24:37.265729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0
00:17:50.997 [2024-07-11 18:24:37.265784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:50.997 [2024-07-11 18:24:37.265817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:17:50.997 [2024-07-11 18:24:37.265834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:50.997 [2024-07-11 18:24:37.265848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:50.997 [2024-07-11 18:24:37.265991] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 218.946 ms, result 0
00:17:50.997 true
00:17:50.997 18:24:37 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # killprocess 90177
00:17:50.997 18:24:37 ftl.ftl_bdevperf -- common/autotest_common.sh@948 -- # '[' -z 90177 ']'
00:17:50.997 18:24:37 ftl.ftl_bdevperf -- common/autotest_common.sh@952 -- # kill -0 90177
00:17:50.997 18:24:37 ftl.ftl_bdevperf -- common/autotest_common.sh@953 -- # uname
00:17:50.997 18:24:37 ftl.ftl_bdevperf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:17:50.997 18:24:37 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 90177
00:17:50.997 18:24:37 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:17:50.997 18:24:37 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:17:50.997 killing process with pid 90177
00:17:50.997 18:24:37 ftl.ftl_bdevperf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 90177'
Received shutdown signal, test time was about 4.000000 seconds
00:17:50.997
00:17:50.997 Latency(us)
00:17:50.997 Device Information : runtime(s)    IOPS      MiB/s    Fail/s   TO/s     Average    min        max
00:17:50.997 ===================================================================================================================
00:17:50.997 Total :              0.00          0.00      0.00     0.00     0.00     0.00       0.00
00:17:50.997 18:24:37 ftl.ftl_bdevperf -- common/autotest_common.sh@967 -- # kill 90177
00:17:50.997 18:24:37 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # wait 90177
00:17:54.290 18:24:40 ftl.ftl_bdevperf -- ftl/bdevperf.sh@38 -- # trap - SIGINT SIGTERM EXIT
00:17:54.290 18:24:40 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # timing_exit '/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0'
00:17:54.290 18:24:40 ftl.ftl_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable
00:17:54.290 18:24:40 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:17:54.290 18:24:40 ftl.ftl_bdevperf -- ftl/bdevperf.sh@41 -- # remove_shm
00:17:54.290 Remove shared memory files
00:17:54.290 18:24:40 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files
00:17:54.290 18:24:40 ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f
00:17:54.290 18:24:40 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f
00:17:54.290 18:24:40 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f
00:17:54.290 18:24:40 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:17:54.290 18:24:40 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f
00:17:54.290
00:17:54.290 real 0m23.182s
00:17:54.290 user 0m26.737s
00:17:54.290 sys 0m1.038s
00:17:54.290 18:24:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable
00:17:54.290 ************************************
00:17:54.290 END TEST ftl_bdevperf
00:17:54.290 ************************************
00:17:54.290 18:24:40 ftl.ftl_bdevperf -- common/autotest_common.sh@10
-- # set +x 00:17:54.290 18:24:40 ftl -- common/autotest_common.sh@1142 -- # return 0 00:17:54.290 18:24:40 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:17:54.290 18:24:40 ftl -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:17:54.290 18:24:40 ftl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:54.290 18:24:40 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:54.290 ************************************ 00:17:54.290 START TEST ftl_trim 00:17:54.290 ************************************ 00:17:54.290 18:24:40 ftl.ftl_trim -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:17:54.290 * Looking for test storage... 00:17:54.290 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:54.290 18:24:40 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:54.290 18:24:40 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:17:54.290 18:24:40 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:54.290 18:24:40 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:54.290 18:24:40 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:17:54.290 18:24:40 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:54.290 18:24:40 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:54.290 18:24:40 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:54.290 18:24:40 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:54.290 18:24:40 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:54.290 18:24:40 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:54.290 18:24:40 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:54.290 18:24:40 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:54.290 18:24:40 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:54.290 18:24:40 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:54.290 18:24:40 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:54.290 18:24:40 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:54.290 18:24:40 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:54.290 18:24:40 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:54.290 18:24:40 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:54.290 18:24:40 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:54.290 18:24:40 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:54.290 18:24:40 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:54.290 18:24:40 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:54.290 18:24:40 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:54.290 18:24:40 ftl.ftl_trim -- 
ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:54.290 18:24:40 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:54.290 18:24:40 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:54.290 18:24:40 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:54.290 18:24:40 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:54.290 18:24:40 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:17:54.290 18:24:40 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:17:54.290 18:24:40 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:17:54.290 18:24:40 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:17:54.290 18:24:40 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:17:54.290 18:24:40 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:17:54.290 18:24:40 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:17:54.290 18:24:40 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:17:54.290 18:24:40 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:54.290 18:24:40 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:54.290 18:24:40 ftl.ftl_trim -- ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:17:54.290 18:24:40 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=90516 00:17:54.290 18:24:40 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 90516 00:17:54.290 18:24:40 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:17:54.290 18:24:40 ftl.ftl_trim -- common/autotest_common.sh@829 -- # '[' -z 90516 ']' 00:17:54.290 18:24:40 ftl.ftl_trim -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:54.290 18:24:40 ftl.ftl_trim -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:54.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:54.290 18:24:40 ftl.ftl_trim -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:54.290 18:24:40 ftl.ftl_trim -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:54.290 18:24:40 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:17:54.291 [2024-07-11 18:24:40.509000] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
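The ftl_trim bring-up that the trace below records one xtrace line at a time reduces to a short RPC sequence. A minimal sketch of the equivalent manual steps, assuming the same repo layout and PCI addresses as this run (the lvstore/lvol names are per-run RPC output, so they are captured into shell variables here, and the test's waitforlisten helper is approximated with a polling loop; those two adjustments are additions for the sketch, everything else appears verbatim in the trace):

    SPDK=/home/vagrant/spdk_repo/spdk
    RPC=$SPDK/scripts/rpc.py

    # Start the SPDK target on cores 0-2, as trim.sh does with -m 0x7.
    $SPDK/build/bin/spdk_tgt -m 0x7 &

    # Crude stand-in for waitforlisten: poll until /var/tmp/spdk.sock answers.
    until $RPC rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done

    # Base device: the QEMU NVMe controller at 0000:00:11.0, wrapped in a
    # thin-provisioned 103424 MiB lvol.
    $RPC bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
    LVS=$($RPC bdev_lvol_create_lvstore nvme0n1 lvs)
    LVOL=$($RPC bdev_lvol_create nvme0n1p0 103424 -t -u "$LVS")

    # NV cache: the controller at 0000:00:10.0, split into one 5171 MB part.
    $RPC bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
    $RPC bdev_split_create nvc0n1 -s 5171 1

    # FTL bdev over base + cache, with the limits used in this run.
    $RPC -t 240 bdev_ftl_create -b ftl0 -d "$LVOL" -c nvc0n1p0 \
        --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10
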
00:17:54.291 [2024-07-11 18:24:40.509205] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90516 ] 00:17:54.291 [2024-07-11 18:24:40.659167] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:54.550 [2024-07-11 18:24:40.704804] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:54.550 [2024-07-11 18:24:40.704917] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:54.550 [2024-07-11 18:24:40.704992] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:55.128 18:24:41 ftl.ftl_trim -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:55.128 18:24:41 ftl.ftl_trim -- common/autotest_common.sh@862 -- # return 0 00:17:55.128 18:24:41 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:17:55.128 18:24:41 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:17:55.128 18:24:41 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:17:55.128 18:24:41 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:17:55.128 18:24:41 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:17:55.128 18:24:41 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:17:55.702 18:24:41 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:17:55.702 18:24:41 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:17:55.702 18:24:41 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:17:55.702 18:24:41 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:17:55.702 18:24:41 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:55.702 18:24:41 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:17:55.702 18:24:41 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:17:55.702 18:24:41 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:17:55.702 18:24:42 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:55.702 { 00:17:55.702 "name": "nvme0n1", 00:17:55.702 "aliases": [ 00:17:55.702 "8a7706d6-acb6-4960-8abd-75b7728aa107" 00:17:55.702 ], 00:17:55.702 "product_name": "NVMe disk", 00:17:55.702 "block_size": 4096, 00:17:55.702 "num_blocks": 1310720, 00:17:55.702 "uuid": "8a7706d6-acb6-4960-8abd-75b7728aa107", 00:17:55.702 "assigned_rate_limits": { 00:17:55.702 "rw_ios_per_sec": 0, 00:17:55.702 "rw_mbytes_per_sec": 0, 00:17:55.702 "r_mbytes_per_sec": 0, 00:17:55.702 "w_mbytes_per_sec": 0 00:17:55.702 }, 00:17:55.702 "claimed": true, 00:17:55.702 "claim_type": "read_many_write_one", 00:17:55.702 "zoned": false, 00:17:55.702 "supported_io_types": { 00:17:55.702 "read": true, 00:17:55.702 "write": true, 00:17:55.702 "unmap": true, 00:17:55.702 "flush": true, 00:17:55.702 "reset": true, 00:17:55.702 "nvme_admin": true, 00:17:55.702 "nvme_io": true, 00:17:55.702 "nvme_io_md": false, 00:17:55.702 "write_zeroes": true, 00:17:55.702 "zcopy": false, 00:17:55.702 "get_zone_info": false, 00:17:55.702 "zone_management": false, 00:17:55.702 "zone_append": false, 00:17:55.702 "compare": true, 00:17:55.702 "compare_and_write": false, 00:17:55.702 "abort": true, 00:17:55.702 "seek_hole": false, 00:17:55.702 "seek_data": false, 00:17:55.702 
"copy": true, 00:17:55.702 "nvme_iov_md": false 00:17:55.702 }, 00:17:55.702 "driver_specific": { 00:17:55.702 "nvme": [ 00:17:55.702 { 00:17:55.702 "pci_address": "0000:00:11.0", 00:17:55.702 "trid": { 00:17:55.702 "trtype": "PCIe", 00:17:55.702 "traddr": "0000:00:11.0" 00:17:55.702 }, 00:17:55.702 "ctrlr_data": { 00:17:55.702 "cntlid": 0, 00:17:55.702 "vendor_id": "0x1b36", 00:17:55.702 "model_number": "QEMU NVMe Ctrl", 00:17:55.702 "serial_number": "12341", 00:17:55.702 "firmware_revision": "8.0.0", 00:17:55.702 "subnqn": "nqn.2019-08.org.qemu:12341", 00:17:55.702 "oacs": { 00:17:55.702 "security": 0, 00:17:55.702 "format": 1, 00:17:55.702 "firmware": 0, 00:17:55.702 "ns_manage": 1 00:17:55.702 }, 00:17:55.702 "multi_ctrlr": false, 00:17:55.702 "ana_reporting": false 00:17:55.702 }, 00:17:55.702 "vs": { 00:17:55.702 "nvme_version": "1.4" 00:17:55.702 }, 00:17:55.702 "ns_data": { 00:17:55.702 "id": 1, 00:17:55.702 "can_share": false 00:17:55.702 } 00:17:55.702 } 00:17:55.702 ], 00:17:55.702 "mp_policy": "active_passive" 00:17:55.702 } 00:17:55.702 } 00:17:55.702 ]' 00:17:55.702 18:24:42 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:55.959 18:24:42 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:17:55.960 18:24:42 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:55.960 18:24:42 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=1310720 00:17:55.960 18:24:42 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:17:55.960 18:24:42 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 5120 00:17:55.960 18:24:42 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:17:55.960 18:24:42 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:17:55.960 18:24:42 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:17:55.960 18:24:42 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:55.960 18:24:42 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:17:56.217 18:24:42 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=a4370aaa-e654-4237-9815-69973153c391 00:17:56.217 18:24:42 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:17:56.217 18:24:42 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a4370aaa-e654-4237-9815-69973153c391 00:17:56.474 18:24:42 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:17:56.731 18:24:43 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=4b925f1c-7b4f-46d1-88e7-3c9b99f88dea 00:17:56.731 18:24:43 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 4b925f1c-7b4f-46d1-88e7-3c9b99f88dea 00:17:56.988 18:24:43 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=da30fa89-2cb8-49c0-8fe7-d279dffe4429 00:17:56.988 18:24:43 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 da30fa89-2cb8-49c0-8fe7-d279dffe4429 00:17:56.988 18:24:43 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:17:56.988 18:24:43 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:17:56.988 18:24:43 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=da30fa89-2cb8-49c0-8fe7-d279dffe4429 00:17:56.988 18:24:43 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:17:56.988 18:24:43 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size da30fa89-2cb8-49c0-8fe7-d279dffe4429 00:17:56.988 18:24:43 
ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=da30fa89-2cb8-49c0-8fe7-d279dffe4429 00:17:56.988 18:24:43 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:56.988 18:24:43 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:17:56.988 18:24:43 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:17:56.988 18:24:43 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b da30fa89-2cb8-49c0-8fe7-d279dffe4429 00:17:57.246 18:24:43 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:57.246 { 00:17:57.246 "name": "da30fa89-2cb8-49c0-8fe7-d279dffe4429", 00:17:57.246 "aliases": [ 00:17:57.246 "lvs/nvme0n1p0" 00:17:57.246 ], 00:17:57.246 "product_name": "Logical Volume", 00:17:57.246 "block_size": 4096, 00:17:57.246 "num_blocks": 26476544, 00:17:57.246 "uuid": "da30fa89-2cb8-49c0-8fe7-d279dffe4429", 00:17:57.246 "assigned_rate_limits": { 00:17:57.246 "rw_ios_per_sec": 0, 00:17:57.246 "rw_mbytes_per_sec": 0, 00:17:57.246 "r_mbytes_per_sec": 0, 00:17:57.246 "w_mbytes_per_sec": 0 00:17:57.246 }, 00:17:57.246 "claimed": false, 00:17:57.246 "zoned": false, 00:17:57.246 "supported_io_types": { 00:17:57.246 "read": true, 00:17:57.246 "write": true, 00:17:57.246 "unmap": true, 00:17:57.246 "flush": false, 00:17:57.246 "reset": true, 00:17:57.246 "nvme_admin": false, 00:17:57.246 "nvme_io": false, 00:17:57.246 "nvme_io_md": false, 00:17:57.246 "write_zeroes": true, 00:17:57.246 "zcopy": false, 00:17:57.246 "get_zone_info": false, 00:17:57.246 "zone_management": false, 00:17:57.246 "zone_append": false, 00:17:57.246 "compare": false, 00:17:57.246 "compare_and_write": false, 00:17:57.246 "abort": false, 00:17:57.246 "seek_hole": true, 00:17:57.246 "seek_data": true, 00:17:57.246 "copy": false, 00:17:57.246 "nvme_iov_md": false 00:17:57.246 }, 00:17:57.246 "driver_specific": { 00:17:57.246 "lvol": { 00:17:57.246 "lvol_store_uuid": "4b925f1c-7b4f-46d1-88e7-3c9b99f88dea", 00:17:57.246 "base_bdev": "nvme0n1", 00:17:57.246 "thin_provision": true, 00:17:57.246 "num_allocated_clusters": 0, 00:17:57.246 "snapshot": false, 00:17:57.246 "clone": false, 00:17:57.246 "esnap_clone": false 00:17:57.246 } 00:17:57.246 } 00:17:57.246 } 00:17:57.246 ]' 00:17:57.246 18:24:43 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:57.247 18:24:43 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:17:57.247 18:24:43 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:57.247 18:24:43 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:17:57.247 18:24:43 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:17:57.247 18:24:43 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:17:57.247 18:24:43 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:17:57.247 18:24:43 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:17:57.247 18:24:43 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:17:57.505 18:24:43 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:17:57.505 18:24:43 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:17:57.505 18:24:43 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size da30fa89-2cb8-49c0-8fe7-d279dffe4429 00:17:57.505 18:24:43 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=da30fa89-2cb8-49c0-8fe7-d279dffe4429 00:17:57.505 
18:24:43 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:57.505 18:24:43 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:17:57.505 18:24:43 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:17:57.505 18:24:43 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b da30fa89-2cb8-49c0-8fe7-d279dffe4429 00:17:57.763 18:24:44 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:57.763 { 00:17:57.763 "name": "da30fa89-2cb8-49c0-8fe7-d279dffe4429", 00:17:57.763 "aliases": [ 00:17:57.763 "lvs/nvme0n1p0" 00:17:57.763 ], 00:17:57.763 "product_name": "Logical Volume", 00:17:57.763 "block_size": 4096, 00:17:57.763 "num_blocks": 26476544, 00:17:57.763 "uuid": "da30fa89-2cb8-49c0-8fe7-d279dffe4429", 00:17:57.763 "assigned_rate_limits": { 00:17:57.763 "rw_ios_per_sec": 0, 00:17:57.763 "rw_mbytes_per_sec": 0, 00:17:57.763 "r_mbytes_per_sec": 0, 00:17:57.763 "w_mbytes_per_sec": 0 00:17:57.763 }, 00:17:57.763 "claimed": false, 00:17:57.763 "zoned": false, 00:17:57.763 "supported_io_types": { 00:17:57.763 "read": true, 00:17:57.763 "write": true, 00:17:57.763 "unmap": true, 00:17:57.763 "flush": false, 00:17:57.763 "reset": true, 00:17:57.763 "nvme_admin": false, 00:17:57.763 "nvme_io": false, 00:17:57.763 "nvme_io_md": false, 00:17:57.763 "write_zeroes": true, 00:17:57.763 "zcopy": false, 00:17:57.763 "get_zone_info": false, 00:17:57.763 "zone_management": false, 00:17:57.763 "zone_append": false, 00:17:57.763 "compare": false, 00:17:57.763 "compare_and_write": false, 00:17:57.763 "abort": false, 00:17:57.763 "seek_hole": true, 00:17:57.763 "seek_data": true, 00:17:57.763 "copy": false, 00:17:57.763 "nvme_iov_md": false 00:17:57.763 }, 00:17:57.763 "driver_specific": { 00:17:57.763 "lvol": { 00:17:57.763 "lvol_store_uuid": "4b925f1c-7b4f-46d1-88e7-3c9b99f88dea", 00:17:57.763 "base_bdev": "nvme0n1", 00:17:57.763 "thin_provision": true, 00:17:57.763 "num_allocated_clusters": 0, 00:17:57.763 "snapshot": false, 00:17:57.763 "clone": false, 00:17:57.763 "esnap_clone": false 00:17:57.763 } 00:17:57.763 } 00:17:57.764 } 00:17:57.764 ]' 00:17:57.764 18:24:44 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:57.764 18:24:44 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:17:57.764 18:24:44 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:58.022 18:24:44 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:17:58.022 18:24:44 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:17:58.022 18:24:44 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:17:58.022 18:24:44 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:17:58.022 18:24:44 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:17:58.022 18:24:44 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:17:58.022 18:24:44 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:17:58.022 18:24:44 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size da30fa89-2cb8-49c0-8fe7-d279dffe4429 00:17:58.022 18:24:44 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=da30fa89-2cb8-49c0-8fe7-d279dffe4429 00:17:58.022 18:24:44 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:58.022 18:24:44 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:17:58.022 18:24:44 ftl.ftl_trim -- 
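At this point the write-buffer side is assembled: the second QEMU NVMe controller at 0000:00:10.0 is attached as nvc0, and a single split of cache_size = 5171 MiB (which appears to be 5% of the 103424 MiB base lvol, truncated) is carved out of nvc0n1 to become the NV cache bdev nvc0n1p0. The RPC sequence, both commands verbatim from the trace:

  $rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
  $rpc bdev_split_create nvc0n1 -s 5171 1    # one 5171 MiB split -> nvc0n1p0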
common/autotest_common.sh@1381 -- # local nb 00:17:58.022 18:24:44 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b da30fa89-2cb8-49c0-8fe7-d279dffe4429 00:17:58.280 18:24:44 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:58.280 { 00:17:58.280 "name": "da30fa89-2cb8-49c0-8fe7-d279dffe4429", 00:17:58.280 "aliases": [ 00:17:58.280 "lvs/nvme0n1p0" 00:17:58.280 ], 00:17:58.280 "product_name": "Logical Volume", 00:17:58.280 "block_size": 4096, 00:17:58.280 "num_blocks": 26476544, 00:17:58.280 "uuid": "da30fa89-2cb8-49c0-8fe7-d279dffe4429", 00:17:58.280 "assigned_rate_limits": { 00:17:58.281 "rw_ios_per_sec": 0, 00:17:58.281 "rw_mbytes_per_sec": 0, 00:17:58.281 "r_mbytes_per_sec": 0, 00:17:58.281 "w_mbytes_per_sec": 0 00:17:58.281 }, 00:17:58.281 "claimed": false, 00:17:58.281 "zoned": false, 00:17:58.281 "supported_io_types": { 00:17:58.281 "read": true, 00:17:58.281 "write": true, 00:17:58.281 "unmap": true, 00:17:58.281 "flush": false, 00:17:58.281 "reset": true, 00:17:58.281 "nvme_admin": false, 00:17:58.281 "nvme_io": false, 00:17:58.281 "nvme_io_md": false, 00:17:58.281 "write_zeroes": true, 00:17:58.281 "zcopy": false, 00:17:58.281 "get_zone_info": false, 00:17:58.281 "zone_management": false, 00:17:58.281 "zone_append": false, 00:17:58.281 "compare": false, 00:17:58.281 "compare_and_write": false, 00:17:58.281 "abort": false, 00:17:58.281 "seek_hole": true, 00:17:58.281 "seek_data": true, 00:17:58.281 "copy": false, 00:17:58.281 "nvme_iov_md": false 00:17:58.281 }, 00:17:58.281 "driver_specific": { 00:17:58.281 "lvol": { 00:17:58.281 "lvol_store_uuid": "4b925f1c-7b4f-46d1-88e7-3c9b99f88dea", 00:17:58.281 "base_bdev": "nvme0n1", 00:17:58.281 "thin_provision": true, 00:17:58.281 "num_allocated_clusters": 0, 00:17:58.281 "snapshot": false, 00:17:58.281 "clone": false, 00:17:58.281 "esnap_clone": false 00:17:58.281 } 00:17:58.281 } 00:17:58.281 } 00:17:58.281 ]' 00:17:58.540 18:24:44 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:58.540 18:24:44 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:17:58.540 18:24:44 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:58.540 18:24:44 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:17:58.540 18:24:44 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:17:58.540 18:24:44 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:17:58.540 18:24:44 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:17:58.540 18:24:44 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d da30fa89-2cb8-49c0-8fe7-d279dffe4429 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:17:58.799 [2024-07-11 18:24:45.004361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.799 [2024-07-11 18:24:45.004418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:58.799 [2024-07-11 18:24:45.004457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:58.799 [2024-07-11 18:24:45.004472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.799 [2024-07-11 18:24:45.007538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.799 [2024-07-11 18:24:45.007594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:58.799 [2024-07-11 18:24:45.007631] 
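The bdev_ftl_create call above is the heart of the setup; the -t 240 passed to rpc.py raises the RPC timeout to 240 s because first-time FTL startup includes scrubbing the NV cache. The flag meanings below are inferred from the startup log itself (core_mask 7 = 0b111, and the 60/10 values resurface later as the 60 MiB L2P limit and the 10% capacity reserve):

  # -d: base/data bdev (the thin lvol), -c: NV cache bdev (the 5171 MiB split)
  # --core_mask 7: run FTL on cores 0-2; --l2p_dram_limit 60: MiB of DRAM for the L2P
  # --overprovisioning 10: percent of the data region held back for relocation
  $rpc -t 240 bdev_ftl_create -b ftl0 -d da30fa89-2cb8-49c0-8fe7-d279dffe4429 \
      -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10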
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.030 ms 00:17:58.799 [2024-07-11 18:24:45.007642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.799 [2024-07-11 18:24:45.007816] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:58.799 [2024-07-11 18:24:45.008178] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:58.799 [2024-07-11 18:24:45.008219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.799 [2024-07-11 18:24:45.008233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:58.799 [2024-07-11 18:24:45.008254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.419 ms 00:17:58.799 [2024-07-11 18:24:45.008266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.799 [2024-07-11 18:24:45.008584] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 1b71244a-3e55-46f4-8731-7cb3003edfa4 00:17:58.799 [2024-07-11 18:24:45.009631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.799 [2024-07-11 18:24:45.009690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:17:58.800 [2024-07-11 18:24:45.009706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:17:58.800 [2024-07-11 18:24:45.009720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.800 [2024-07-11 18:24:45.014339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.800 [2024-07-11 18:24:45.014403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:58.800 [2024-07-11 18:24:45.014436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.524 ms 00:17:58.800 [2024-07-11 18:24:45.014469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.800 [2024-07-11 18:24:45.014652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.800 [2024-07-11 18:24:45.014678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:58.800 [2024-07-11 18:24:45.014692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:17:58.800 [2024-07-11 18:24:45.014709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.800 [2024-07-11 18:24:45.014763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.800 [2024-07-11 18:24:45.014786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:58.800 [2024-07-11 18:24:45.014799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:17:58.800 [2024-07-11 18:24:45.014812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.800 [2024-07-11 18:24:45.014856] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:58.800 [2024-07-11 18:24:45.016392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.800 [2024-07-11 18:24:45.016425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:58.800 [2024-07-11 18:24:45.016462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.542 ms 00:17:58.800 [2024-07-11 18:24:45.016473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.800 [2024-07-11 
18:24:45.016529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.800 [2024-07-11 18:24:45.016544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:58.800 [2024-07-11 18:24:45.016558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:17:58.800 [2024-07-11 18:24:45.016569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.800 [2024-07-11 18:24:45.016610] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:17:58.800 [2024-07-11 18:24:45.016815] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:58.800 [2024-07-11 18:24:45.016844] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:58.800 [2024-07-11 18:24:45.016860] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:17:58.800 [2024-07-11 18:24:45.016878] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:58.800 [2024-07-11 18:24:45.016895] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:58.800 [2024-07-11 18:24:45.016908] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:58.800 [2024-07-11 18:24:45.016920] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:58.800 [2024-07-11 18:24:45.016949] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:58.800 [2024-07-11 18:24:45.016960] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:58.800 [2024-07-11 18:24:45.016977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.800 [2024-07-11 18:24:45.016988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:58.800 [2024-07-11 18:24:45.017002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.371 ms 00:17:58.800 [2024-07-11 18:24:45.017013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.800 [2024-07-11 18:24:45.017136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.800 [2024-07-11 18:24:45.017153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:58.800 [2024-07-11 18:24:45.017169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:17:58.800 [2024-07-11 18:24:45.017181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.800 [2024-07-11 18:24:45.017312] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:58.800 [2024-07-11 18:24:45.017363] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:58.800 [2024-07-11 18:24:45.017381] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:58.800 [2024-07-11 18:24:45.017394] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:58.800 [2024-07-11 18:24:45.017407] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:58.800 [2024-07-11 18:24:45.017418] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:58.800 [2024-07-11 18:24:45.017431] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:58.800 [2024-07-11 18:24:45.017441] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region 
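The layout figures are internally consistent and worth checking once: the data region (data_btm, below) is 102400 MiB of the 103424 MiB base device, the 10% overprovisioning leaves 102400 * 0.9 = 92160 MiB user-visible, and at a 4096-byte block that is 92160 * 256 = 23592960 blocks, exactly the "L2P entries" count here and the num_blocks that ftl0 reports once created. A fully resident L2P at the stated 4 bytes per entry would take 23592960 * 4 B = 90 MiB, matching the 90.00 MiB l2p region; since that exceeds the 60 MiB --l2p_dram_limit, the L2P has to run as a partially resident cache (hence "l2p maximum resident size is: 59 (of 60) MiB" further down).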
band_md 00:17:58.800 [2024-07-11 18:24:45.017454] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:58.800 [2024-07-11 18:24:45.017464] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:58.800 [2024-07-11 18:24:45.017479] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:58.800 [2024-07-11 18:24:45.017489] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:58.800 [2024-07-11 18:24:45.017501] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:58.800 [2024-07-11 18:24:45.017512] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:58.800 [2024-07-11 18:24:45.017526] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:58.800 [2024-07-11 18:24:45.017536] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:58.800 [2024-07-11 18:24:45.017549] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:58.800 [2024-07-11 18:24:45.017559] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:58.800 [2024-07-11 18:24:45.017571] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:58.800 [2024-07-11 18:24:45.017582] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:58.800 [2024-07-11 18:24:45.017594] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:58.800 [2024-07-11 18:24:45.017604] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:58.800 [2024-07-11 18:24:45.017616] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:58.800 [2024-07-11 18:24:45.017626] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:58.800 [2024-07-11 18:24:45.017638] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:58.800 [2024-07-11 18:24:45.017648] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:58.800 [2024-07-11 18:24:45.017661] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:58.800 [2024-07-11 18:24:45.017671] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:58.800 [2024-07-11 18:24:45.017683] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:58.800 [2024-07-11 18:24:45.017693] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:58.800 [2024-07-11 18:24:45.017709] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:58.800 [2024-07-11 18:24:45.017719] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:58.800 [2024-07-11 18:24:45.017731] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:58.800 [2024-07-11 18:24:45.017758] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:58.800 [2024-07-11 18:24:45.017772] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:58.800 [2024-07-11 18:24:45.017783] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:58.800 [2024-07-11 18:24:45.017795] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:58.800 [2024-07-11 18:24:45.017805] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:58.800 [2024-07-11 18:24:45.017817] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:58.800 [2024-07-11 18:24:45.017827] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:58.800 [2024-07-11 18:24:45.017839] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:58.800 [2024-07-11 18:24:45.017849] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:58.800 [2024-07-11 18:24:45.017862] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:58.800 [2024-07-11 18:24:45.017872] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:58.800 [2024-07-11 18:24:45.017885] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:58.800 [2024-07-11 18:24:45.017896] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:58.800 [2024-07-11 18:24:45.017911] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:58.800 [2024-07-11 18:24:45.017923] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:58.800 [2024-07-11 18:24:45.017936] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:58.800 [2024-07-11 18:24:45.017947] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:58.800 [2024-07-11 18:24:45.017959] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:58.800 [2024-07-11 18:24:45.017969] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:58.800 [2024-07-11 18:24:45.017981] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:58.800 [2024-07-11 18:24:45.017998] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:58.800 [2024-07-11 18:24:45.018015] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:58.800 [2024-07-11 18:24:45.018028] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:58.800 [2024-07-11 18:24:45.018041] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:58.800 [2024-07-11 18:24:45.018052] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:58.800 [2024-07-11 18:24:45.018066] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:58.800 [2024-07-11 18:24:45.018089] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:58.800 [2024-07-11 18:24:45.018105] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:58.800 [2024-07-11 18:24:45.018117] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:58.800 [2024-07-11 18:24:45.018132] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:58.800 [2024-07-11 18:24:45.018144] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:58.800 [2024-07-11 18:24:45.018157] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 
ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:58.800 [2024-07-11 18:24:45.018169] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:58.800 [2024-07-11 18:24:45.018182] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:58.801 [2024-07-11 18:24:45.018193] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:58.801 [2024-07-11 18:24:45.018207] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:58.801 [2024-07-11 18:24:45.018218] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:58.801 [2024-07-11 18:24:45.018232] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:58.801 [2024-07-11 18:24:45.018247] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:58.801 [2024-07-11 18:24:45.018260] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:58.801 [2024-07-11 18:24:45.018272] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:58.801 [2024-07-11 18:24:45.018286] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:58.801 [2024-07-11 18:24:45.018298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.801 [2024-07-11 18:24:45.018312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:58.801 [2024-07-11 18:24:45.018323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.055 ms 00:17:58.801 [2024-07-11 18:24:45.018340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.801 [2024-07-11 18:24:45.018456] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
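The SB metadata dump repeats the same layout in raw 4 KiB blocks, so the hex blk_sz values can be cross-checked against the MiB figures above; for example the type:0x2 region (the L2P) and the type:0x9 base-device data region:

  echo $((0x5a00    * 4096 / 1048576))   # -> 90     (MiB), the l2p region
  echo $((0x1900000 * 4096 / 1048576))   # -> 102400 (MiB), the data_btm region

The "NV cache data region needs scrubbing" notice that closes this block is the expensive part of a first-time create; the 5 chunks being scrubbed match the "NV cache chunk count 5" reported during layout setup.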
00:17:58.801 [2024-07-11 18:24:45.018488] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:18:00.699 [2024-07-11 18:24:47.095861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.699 [2024-07-11 18:24:47.095943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:18:00.699 [2024-07-11 18:24:47.095980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2077.414 ms 00:18:00.699 [2024-07-11 18:24:47.095998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.699 [2024-07-11 18:24:47.103947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.699 [2024-07-11 18:24:47.103996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:00.699 [2024-07-11 18:24:47.104014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.734 ms 00:18:00.699 [2024-07-11 18:24:47.104048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.699 [2024-07-11 18:24:47.104267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.699 [2024-07-11 18:24:47.104297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:00.699 [2024-07-11 18:24:47.104311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:18:00.699 [2024-07-11 18:24:47.104328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.957 [2024-07-11 18:24:47.120889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.957 [2024-07-11 18:24:47.120961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:00.957 [2024-07-11 18:24:47.120997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.519 ms 00:18:00.957 [2024-07-11 18:24:47.121011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.957 [2024-07-11 18:24:47.121169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.957 [2024-07-11 18:24:47.121211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:00.957 [2024-07-11 18:24:47.121228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:00.957 [2024-07-11 18:24:47.121242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.957 [2024-07-11 18:24:47.121575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.957 [2024-07-11 18:24:47.121609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:00.957 [2024-07-11 18:24:47.121624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.291 ms 00:18:00.957 [2024-07-11 18:24:47.121638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.957 [2024-07-11 18:24:47.121806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.957 [2024-07-11 18:24:47.121832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:00.957 [2024-07-11 18:24:47.121845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.130 ms 00:18:00.957 [2024-07-11 18:24:47.121889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.957 [2024-07-11 18:24:47.127770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.957 [2024-07-11 18:24:47.127836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:00.957 [2024-07-11 
18:24:47.127852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.840 ms 00:18:00.957 [2024-07-11 18:24:47.127866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.957 [2024-07-11 18:24:47.136904] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:00.957 [2024-07-11 18:24:47.150669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.957 [2024-07-11 18:24:47.150728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:00.957 [2024-07-11 18:24:47.150751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.659 ms 00:18:00.957 [2024-07-11 18:24:47.150763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.957 [2024-07-11 18:24:47.202252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.957 [2024-07-11 18:24:47.202330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:18:00.957 [2024-07-11 18:24:47.202379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.350 ms 00:18:00.957 [2024-07-11 18:24:47.202391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.957 [2024-07-11 18:24:47.202647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.957 [2024-07-11 18:24:47.202675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:00.957 [2024-07-11 18:24:47.202710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.180 ms 00:18:00.957 [2024-07-11 18:24:47.202722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.957 [2024-07-11 18:24:47.206146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.957 [2024-07-11 18:24:47.206204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:18:00.957 [2024-07-11 18:24:47.206223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.379 ms 00:18:00.957 [2024-07-11 18:24:47.206236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.957 [2024-07-11 18:24:47.209345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.957 [2024-07-11 18:24:47.209402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:18:00.957 [2024-07-11 18:24:47.209422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.050 ms 00:18:00.957 [2024-07-11 18:24:47.209434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.957 [2024-07-11 18:24:47.209841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.957 [2024-07-11 18:24:47.209883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:00.957 [2024-07-11 18:24:47.209915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.335 ms 00:18:00.957 [2024-07-11 18:24:47.209927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.957 [2024-07-11 18:24:47.240605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.957 [2024-07-11 18:24:47.240678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:18:00.957 [2024-07-11 18:24:47.240716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.632 ms 00:18:00.957 [2024-07-11 18:24:47.240729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.957 [2024-07-11 18:24:47.245061] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.957 [2024-07-11 18:24:47.245144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:18:00.957 [2024-07-11 18:24:47.245182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.262 ms 00:18:00.957 [2024-07-11 18:24:47.245194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.957 [2024-07-11 18:24:47.248830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.957 [2024-07-11 18:24:47.248884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:18:00.957 [2024-07-11 18:24:47.248923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.573 ms 00:18:00.957 [2024-07-11 18:24:47.248934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.957 [2024-07-11 18:24:47.252942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.957 [2024-07-11 18:24:47.252999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:00.957 [2024-07-11 18:24:47.253018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.945 ms 00:18:00.958 [2024-07-11 18:24:47.253031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.958 [2024-07-11 18:24:47.253135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.958 [2024-07-11 18:24:47.253156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:00.958 [2024-07-11 18:24:47.253171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:18:00.958 [2024-07-11 18:24:47.253198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.958 [2024-07-11 18:24:47.253290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.958 [2024-07-11 18:24:47.253324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:00.958 [2024-07-11 18:24:47.253340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:18:00.958 [2024-07-11 18:24:47.253352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.958 [2024-07-11 18:24:47.254359] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:00.958 [2024-07-11 18:24:47.255606] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2249.617 ms, result 0 00:18:00.958 [2024-07-11 18:24:47.256418] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:00.958 { 00:18:00.958 "name": "ftl0", 00:18:00.958 "uuid": "1b71244a-3e55-46f4-8731-7cb3003edfa4" 00:18:00.958 } 00:18:00.958 18:24:47 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:18:00.958 18:24:47 ftl.ftl_trim -- common/autotest_common.sh@897 -- # local bdev_name=ftl0 00:18:00.958 18:24:47 ftl.ftl_trim -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:00.958 18:24:47 ftl.ftl_trim -- common/autotest_common.sh@899 -- # local i 00:18:00.958 18:24:47 ftl.ftl_trim -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:00.958 18:24:47 ftl.ftl_trim -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:00.958 18:24:47 ftl.ftl_trim -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:01.216 18:24:47 ftl.ftl_trim -- common/autotest_common.sh@904 -- # 
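Startup finishes in 2249.617 ms, of which the NV cache scrub alone took 2077.414 ms; the RPC then returns the new bdev's name and UUID as JSON. waitforbdev (another autotest_common.sh helper) blocks until the bdev is actually usable; a minimal sketch of the pattern visible in the trace, assuming bdev_get_bdevs -t waits the given number of milliseconds for the bdev to appear:

  waitforbdev() {
      local bdev_name=$1 bdev_timeout=${2:-2000}   # timeout in ms
      $rpc bdev_wait_for_examine                   # let examine callbacks settle
      $rpc bdev_get_bdevs -b "$bdev_name" -t "$bdev_timeout" > /dev/null
  }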
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:18:01.473 [ 00:18:01.473 { 00:18:01.473 "name": "ftl0", 00:18:01.473 "aliases": [ 00:18:01.473 "1b71244a-3e55-46f4-8731-7cb3003edfa4" 00:18:01.474 ], 00:18:01.474 "product_name": "FTL disk", 00:18:01.474 "block_size": 4096, 00:18:01.474 "num_blocks": 23592960, 00:18:01.474 "uuid": "1b71244a-3e55-46f4-8731-7cb3003edfa4", 00:18:01.474 "assigned_rate_limits": { 00:18:01.474 "rw_ios_per_sec": 0, 00:18:01.474 "rw_mbytes_per_sec": 0, 00:18:01.474 "r_mbytes_per_sec": 0, 00:18:01.474 "w_mbytes_per_sec": 0 00:18:01.474 }, 00:18:01.474 "claimed": false, 00:18:01.474 "zoned": false, 00:18:01.474 "supported_io_types": { 00:18:01.474 "read": true, 00:18:01.474 "write": true, 00:18:01.474 "unmap": true, 00:18:01.474 "flush": true, 00:18:01.474 "reset": false, 00:18:01.474 "nvme_admin": false, 00:18:01.474 "nvme_io": false, 00:18:01.474 "nvme_io_md": false, 00:18:01.474 "write_zeroes": true, 00:18:01.474 "zcopy": false, 00:18:01.474 "get_zone_info": false, 00:18:01.474 "zone_management": false, 00:18:01.474 "zone_append": false, 00:18:01.474 "compare": false, 00:18:01.474 "compare_and_write": false, 00:18:01.474 "abort": false, 00:18:01.474 "seek_hole": false, 00:18:01.474 "seek_data": false, 00:18:01.474 "copy": false, 00:18:01.474 "nvme_iov_md": false 00:18:01.474 }, 00:18:01.474 "driver_specific": { 00:18:01.474 "ftl": { 00:18:01.474 "base_bdev": "da30fa89-2cb8-49c0-8fe7-d279dffe4429", 00:18:01.474 "cache": "nvc0n1p0" 00:18:01.474 } 00:18:01.474 } 00:18:01.474 } 00:18:01.474 ] 00:18:01.474 18:24:47 ftl.ftl_trim -- common/autotest_common.sh@905 -- # return 0 00:18:01.474 18:24:47 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:18:01.474 18:24:47 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:18:01.731 18:24:48 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:18:01.731 18:24:48 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:18:01.990 18:24:48 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:18:01.990 { 00:18:01.990 "name": "ftl0", 00:18:01.990 "aliases": [ 00:18:01.990 "1b71244a-3e55-46f4-8731-7cb3003edfa4" 00:18:01.990 ], 00:18:01.990 "product_name": "FTL disk", 00:18:01.990 "block_size": 4096, 00:18:01.990 "num_blocks": 23592960, 00:18:01.990 "uuid": "1b71244a-3e55-46f4-8731-7cb3003edfa4", 00:18:01.990 "assigned_rate_limits": { 00:18:01.990 "rw_ios_per_sec": 0, 00:18:01.990 "rw_mbytes_per_sec": 0, 00:18:01.990 "r_mbytes_per_sec": 0, 00:18:01.990 "w_mbytes_per_sec": 0 00:18:01.990 }, 00:18:01.990 "claimed": false, 00:18:01.990 "zoned": false, 00:18:01.990 "supported_io_types": { 00:18:01.990 "read": true, 00:18:01.990 "write": true, 00:18:01.990 "unmap": true, 00:18:01.990 "flush": true, 00:18:01.990 "reset": false, 00:18:01.990 "nvme_admin": false, 00:18:01.990 "nvme_io": false, 00:18:01.990 "nvme_io_md": false, 00:18:01.990 "write_zeroes": true, 00:18:01.990 "zcopy": false, 00:18:01.990 "get_zone_info": false, 00:18:01.990 "zone_management": false, 00:18:01.990 "zone_append": false, 00:18:01.990 "compare": false, 00:18:01.990 "compare_and_write": false, 00:18:01.990 "abort": false, 00:18:01.990 "seek_hole": false, 00:18:01.990 "seek_data": false, 00:18:01.990 "copy": false, 00:18:01.990 "nvme_iov_md": false 00:18:01.990 }, 00:18:01.990 "driver_specific": { 00:18:01.990 "ftl": { 00:18:01.990 "base_bdev": "da30fa89-2cb8-49c0-8fe7-d279dffe4429", 00:18:01.990 "cache": "nvc0n1p0" 
00:18:01.990 } 00:18:01.990 } 00:18:01.990 } 00:18:01.990 ]' 00:18:01.990 18:24:48 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:18:01.990 18:24:48 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:18:01.990 18:24:48 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:18:02.250 [2024-07-11 18:24:48.535686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.250 [2024-07-11 18:24:48.535764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:02.250 [2024-07-11 18:24:48.535801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:02.250 [2024-07-11 18:24:48.535814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.250 [2024-07-11 18:24:48.535860] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:02.250 [2024-07-11 18:24:48.536323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.250 [2024-07-11 18:24:48.536364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:02.250 [2024-07-11 18:24:48.536382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.434 ms 00:18:02.250 [2024-07-11 18:24:48.536394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.250 [2024-07-11 18:24:48.536972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.250 [2024-07-11 18:24:48.536996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:02.250 [2024-07-11 18:24:48.537028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.527 ms 00:18:02.250 [2024-07-11 18:24:48.537040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.250 [2024-07-11 18:24:48.540792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.250 [2024-07-11 18:24:48.540837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:02.250 [2024-07-11 18:24:48.540870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.712 ms 00:18:02.250 [2024-07-11 18:24:48.540898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.250 [2024-07-11 18:24:48.548384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.250 [2024-07-11 18:24:48.548455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:02.250 [2024-07-11 18:24:48.548506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.405 ms 00:18:02.250 [2024-07-11 18:24:48.548518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.250 [2024-07-11 18:24:48.550144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.250 [2024-07-11 18:24:48.550241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:02.250 [2024-07-11 18:24:48.550260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.469 ms 00:18:02.250 [2024-07-11 18:24:48.550271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.250 [2024-07-11 18:24:48.554194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.250 [2024-07-11 18:24:48.554250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:02.250 [2024-07-11 18:24:48.554296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.867 ms 00:18:02.250 
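Between the two ftl0 dumps, trim.sh snapshots a replayable configuration: save_subsystem_config -n bdev prints a single subsystem object, and the surrounding echo lines wrap it into a complete {"subsystems": [...]} document of the kind SPDK applications accept back via --json. As a unit (the destination below is illustrative; where the script actually sends the output is not visible in this excerpt):

  {
      echo '{"subsystems": ['
      $rpc save_subsystem_config -n bdev
      echo ']}'
  } > ftl.json    # illustrative path, not from the trace

The bdev_ftl_unload -b ftl0 that follows is a graceful shutdown: L2P, NV cache metadata, valid map, P2L, band and trim metadata, and finally the superblock are all persisted before the device is flipped from dirty to clean ("Set FTL clean state" below). A cleanly unloaded FTL should be reattachable with bdev_ftl_load using the UUID printed at create time; the bdev_ftl_load flags below are an assumption mirroring bdev_ftl_create, not part of this trace:

  $rpc bdev_ftl_unload -b ftl0
  # assumed reload invocation, not executed in this run:
  $rpc bdev_ftl_load -b ftl0 -d da30fa89-2cb8-49c0-8fe7-d279dffe4429 \
      -c nvc0n1p0 -u 1b71244a-3e55-46f4-8731-7cb3003edfa4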
[2024-07-11 18:24:48.554309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.250 [2024-07-11 18:24:48.554516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.250 [2024-07-11 18:24:48.554540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:02.250 [2024-07-11 18:24:48.554572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.147 ms 00:18:02.250 [2024-07-11 18:24:48.554594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.250 [2024-07-11 18:24:48.556214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.250 [2024-07-11 18:24:48.556250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:18:02.250 [2024-07-11 18:24:48.556267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.578 ms 00:18:02.250 [2024-07-11 18:24:48.556278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.250 [2024-07-11 18:24:48.557716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.250 [2024-07-11 18:24:48.557784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:18:02.250 [2024-07-11 18:24:48.557801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.377 ms 00:18:02.250 [2024-07-11 18:24:48.557812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.250 [2024-07-11 18:24:48.558900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.250 [2024-07-11 18:24:48.558971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:02.250 [2024-07-11 18:24:48.558989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.026 ms 00:18:02.250 [2024-07-11 18:24:48.559000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.250 [2024-07-11 18:24:48.560169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.250 [2024-07-11 18:24:48.560205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:02.250 [2024-07-11 18:24:48.560312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.033 ms 00:18:02.250 [2024-07-11 18:24:48.560326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.250 [2024-07-11 18:24:48.560477] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:02.250 [2024-07-11 18:24:48.560529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:02.250 [2024-07-11 18:24:48.560548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:02.250 [2024-07-11 18:24:48.560561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:02.250 [2024-07-11 18:24:48.560577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:02.250 [2024-07-11 18:24:48.560589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:02.250 [2024-07-11 18:24:48.560603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:02.250 [2024-07-11 18:24:48.560615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:02.250 [2024-07-11 18:24:48.560628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:02.250 [2024-07-11 18:24:48.560640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:02.250 [2024-07-11 18:24:48.560653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:02.250 [2024-07-11 18:24:48.560665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:02.250 [2024-07-11 18:24:48.560678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:02.250 [2024-07-11 18:24:48.560690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:02.250 [2024-07-11 18:24:48.560704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:02.250 [2024-07-11 18:24:48.560715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:02.250 [2024-07-11 18:24:48.560729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:02.250 [2024-07-11 18:24:48.560740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:02.250 [2024-07-11 18:24:48.560754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:02.250 [2024-07-11 18:24:48.560765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:02.250 [2024-07-11 18:24:48.560780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:02.250 [2024-07-11 18:24:48.560792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:02.250 [2024-07-11 18:24:48.560808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:02.250 [2024-07-11 18:24:48.560820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:02.250 [2024-07-11 18:24:48.560834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:02.250 [2024-07-11 18:24:48.560846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:02.250 [2024-07-11 18:24:48.560860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:02.250 [2024-07-11 18:24:48.560871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:02.250 [2024-07-11 18:24:48.560884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:02.250 [2024-07-11 18:24:48.560896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:02.250 [2024-07-11 18:24:48.560910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:02.250 [2024-07-11 18:24:48.560922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:02.250 [2024-07-11 18:24:48.560936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:02.250 [2024-07-11 18:24:48.560948] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:02.250 [2024-07-11 18:24:48.560961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:02.250 [2024-07-11 18:24:48.560972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:02.250 [2024-07-11 18:24:48.560988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:02.250 [2024-07-11 18:24:48.560999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:02.250 [2024-07-11 18:24:48.561012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:02.250 [2024-07-11 18:24:48.561024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:02.250 [2024-07-11 18:24:48.561037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:02.250 [2024-07-11 18:24:48.561049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:02.250 [2024-07-11 18:24:48.561062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:02.251 [2024-07-11 18:24:48.561073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:02.251 [2024-07-11 18:24:48.561103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:02.251 [2024-07-11 18:24:48.561115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:02.251 [2024-07-11 18:24:48.561129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:02.251 [2024-07-11 18:24:48.561140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:02.251 [2024-07-11 18:24:48.561154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:02.251 [2024-07-11 18:24:48.561165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:02.251 [2024-07-11 18:24:48.561180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:02.251 [2024-07-11 18:24:48.561192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:02.251 [2024-07-11 18:24:48.561208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:02.251 [2024-07-11 18:24:48.561220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:02.251 [2024-07-11 18:24:48.561233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:02.251 [2024-07-11 18:24:48.561244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:02.251 [2024-07-11 18:24:48.561258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:02.251 [2024-07-11 18:24:48.561269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:02.251 [2024-07-11 
18:24:48.561283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:02.251 [2024-07-11 18:24:48.561294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:02.251 [2024-07-11 18:24:48.561307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:02.251 [2024-07-11 18:24:48.561319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:02.251 [2024-07-11 18:24:48.561333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:02.251 [2024-07-11 18:24:48.561346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:02.251 [2024-07-11 18:24:48.561359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:02.251 [2024-07-11 18:24:48.561371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:02.251 [2024-07-11 18:24:48.561385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:02.251 [2024-07-11 18:24:48.561396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:02.251 [2024-07-11 18:24:48.561412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:02.251 [2024-07-11 18:24:48.561424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:02.251 [2024-07-11 18:24:48.561437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:02.251 [2024-07-11 18:24:48.561448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:02.251 [2024-07-11 18:24:48.561462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:02.251 [2024-07-11 18:24:48.561473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:02.251 [2024-07-11 18:24:48.561487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:02.251 [2024-07-11 18:24:48.561498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:02.251 [2024-07-11 18:24:48.561513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:02.251 [2024-07-11 18:24:48.561525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:02.251 [2024-07-11 18:24:48.561539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:02.251 [2024-07-11 18:24:48.561550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:02.251 [2024-07-11 18:24:48.561564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:02.251 [2024-07-11 18:24:48.561576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:02.251 [2024-07-11 18:24:48.561589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 
00:18:02.251 [2024-07-11 18:24:48.561600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:02.251 [2024-07-11 18:24:48.561616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:02.251 [2024-07-11 18:24:48.561627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:02.251 [2024-07-11 18:24:48.561641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:02.251 [2024-07-11 18:24:48.561652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:02.251 [2024-07-11 18:24:48.561665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:02.251 [2024-07-11 18:24:48.561677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:02.251 [2024-07-11 18:24:48.561691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:02.251 [2024-07-11 18:24:48.561702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:02.251 [2024-07-11 18:24:48.561716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:02.251 [2024-07-11 18:24:48.561728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:02.251 [2024-07-11 18:24:48.561760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:02.251 [2024-07-11 18:24:48.561773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:02.251 [2024-07-11 18:24:48.561786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:02.251 [2024-07-11 18:24:48.561798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:02.251 [2024-07-11 18:24:48.561811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:02.251 [2024-07-11 18:24:48.561823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:02.251 [2024-07-11 18:24:48.561840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:02.251 [2024-07-11 18:24:48.561861] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:02.251 [2024-07-11 18:24:48.561874] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1b71244a-3e55-46f4-8731-7cb3003edfa4 00:18:02.251 [2024-07-11 18:24:48.561887] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:02.251 [2024-07-11 18:24:48.561899] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:02.251 [2024-07-11 18:24:48.561927] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:02.251 [2024-07-11 18:24:48.561941] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:02.251 [2024-07-11 18:24:48.561951] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:02.251 [2024-07-11 18:24:48.561965] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:02.251 [2024-07-11 18:24:48.561976] 
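The closing statistics are what you would expect from a device that only went through startup and teardown: no user I/O was ever issued, so every band still reads "0 / 261120 wr_cnt: 0 state: free", user writes is 0 while the FTL performed 960 writes of its own metadata, and the write amplification factor is WAF = total writes / user writes = 960 / 0, reported as inf.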
ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:02.251 [2024-07-11 18:24:48.561988] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:02.251 [2024-07-11 18:24:48.561998] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:02.251 [2024-07-11 18:24:48.562011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.251 [2024-07-11 18:24:48.562023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:02.251 [2024-07-11 18:24:48.562037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.539 ms 00:18:02.251 [2024-07-11 18:24:48.562048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.251 [2024-07-11 18:24:48.563649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.251 [2024-07-11 18:24:48.563698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:02.251 [2024-07-11 18:24:48.563715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.528 ms 00:18:02.251 [2024-07-11 18:24:48.563727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.251 [2024-07-11 18:24:48.563820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.251 [2024-07-11 18:24:48.563850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:02.251 [2024-07-11 18:24:48.563865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:18:02.251 [2024-07-11 18:24:48.563876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.251 [2024-07-11 18:24:48.569281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:02.251 [2024-07-11 18:24:48.569322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:02.251 [2024-07-11 18:24:48.569340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:02.251 [2024-07-11 18:24:48.569352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.251 [2024-07-11 18:24:48.569480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:02.251 [2024-07-11 18:24:48.569498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:02.251 [2024-07-11 18:24:48.569528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:02.251 [2024-07-11 18:24:48.569540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.251 [2024-07-11 18:24:48.569627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:02.251 [2024-07-11 18:24:48.569648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:02.251 [2024-07-11 18:24:48.569663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:02.251 [2024-07-11 18:24:48.569674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.251 [2024-07-11 18:24:48.569717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:02.251 [2024-07-11 18:24:48.569732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:02.251 [2024-07-11 18:24:48.569745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:02.251 [2024-07-11 18:24:48.569757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.251 [2024-07-11 18:24:48.578348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Rollback 00:18:02.251 [2024-07-11 18:24:48.578432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:02.251 [2024-07-11 18:24:48.578454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:02.251 [2024-07-11 18:24:48.578466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.251 [2024-07-11 18:24:48.585051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:02.251 [2024-07-11 18:24:48.585128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:02.251 [2024-07-11 18:24:48.585164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:02.251 [2024-07-11 18:24:48.585177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.251 [2024-07-11 18:24:48.585285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:02.252 [2024-07-11 18:24:48.585303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:02.252 [2024-07-11 18:24:48.585321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:02.252 [2024-07-11 18:24:48.585332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.252 [2024-07-11 18:24:48.585410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:02.252 [2024-07-11 18:24:48.585425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:02.252 [2024-07-11 18:24:48.585439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:02.252 [2024-07-11 18:24:48.585450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.252 [2024-07-11 18:24:48.585571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:02.252 [2024-07-11 18:24:48.585603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:02.252 [2024-07-11 18:24:48.585619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:02.252 [2024-07-11 18:24:48.585633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.252 [2024-07-11 18:24:48.585715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:02.252 [2024-07-11 18:24:48.585733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:02.252 [2024-07-11 18:24:48.585748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:02.252 [2024-07-11 18:24:48.585759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.252 [2024-07-11 18:24:48.585824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:02.252 [2024-07-11 18:24:48.585840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:02.252 [2024-07-11 18:24:48.585873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:02.252 [2024-07-11 18:24:48.585887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.252 [2024-07-11 18:24:48.585959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:02.252 [2024-07-11 18:24:48.585976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:02.252 [2024-07-11 18:24:48.585990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:02.252 [2024-07-11 18:24:48.586001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.252 [2024-07-11 
18:24:48.586251] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 50.515 ms, result 0
00:18:02.252 true
00:18:02.252 18:24:48 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 90516
00:18:02.252 18:24:48 ftl.ftl_trim -- common/autotest_common.sh@948 -- # '[' -z 90516 ']'
00:18:02.252 18:24:48 ftl.ftl_trim -- common/autotest_common.sh@952 -- # kill -0 90516
00:18:02.252 18:24:48 ftl.ftl_trim -- common/autotest_common.sh@953 -- # uname
00:18:02.252 18:24:48 ftl.ftl_trim -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:18:02.252 18:24:48 ftl.ftl_trim -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 90516
00:18:02.252 18:24:48 ftl.ftl_trim -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:18:02.252 18:24:48 ftl.ftl_trim -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:18:02.252 killing process with pid 90516
18:24:48 ftl.ftl_trim -- common/autotest_common.sh@966 -- # echo 'killing process with pid 90516'
00:18:02.252 18:24:48 ftl.ftl_trim -- common/autotest_common.sh@967 -- # kill 90516
00:18:02.252 18:24:48 ftl.ftl_trim -- common/autotest_common.sh@972 -- # wait 90516
00:18:05.543 18:24:51 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536
00:18:06.480 65536+0 records in
00:18:06.480 65536+0 records out
00:18:06.480 268435456 bytes (268 MB, 256 MiB) copied, 1.11794 s, 240 MB/s
00:18:06.480 18:24:52 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
[2024-07-11 18:24:52.784462] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization...
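
Before replaying data through spdk_dd, trim.sh@66 stages a 256 MiB random pattern with dd, and the numbers dd reports are self-consistent. A quick sanity check follows; this is a standalone sketch for reading the log, not part of the SPDK test scripts:

  # 65536 blocks x 4 KiB (bs=4K) = 268435456 bytes, and dd reports its
  # rate in decimal megabytes: 268435456 B / 1.11794 s ~= 240 MB/s.
  bytes=$((65536 * 4096))
  echo "${bytes} bytes ($((bytes / 1000000)) MB, $((bytes / 1048576)) MiB)"
  awk -v b="${bytes}" -v t=1.11794 'BEGIN { printf "%.0f MB/s\n", b / t / 1e6 }'

All three figures printed here match dd's summary line above, so the urandom staging step ran at full speed and the pattern file is exactly the 256 MiB the trim test expects to copy.
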
00:18:06.480 [2024-07-11 18:24:52.784618] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90686 ] 00:18:06.739 [2024-07-11 18:24:52.929609] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:06.739 [2024-07-11 18:24:52.971909] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:06.739 [2024-07-11 18:24:53.062152] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:06.739 [2024-07-11 18:24:53.062274] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:06.998 [2024-07-11 18:24:53.221046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.998 [2024-07-11 18:24:53.221127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:06.998 [2024-07-11 18:24:53.221173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:06.998 [2024-07-11 18:24:53.221184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.998 [2024-07-11 18:24:53.223699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.998 [2024-07-11 18:24:53.223755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:06.998 [2024-07-11 18:24:53.223786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.487 ms 00:18:06.998 [2024-07-11 18:24:53.223808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.998 [2024-07-11 18:24:53.223919] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:06.998 [2024-07-11 18:24:53.224226] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:06.998 [2024-07-11 18:24:53.224265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.998 [2024-07-11 18:24:53.224279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:06.998 [2024-07-11 18:24:53.224317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.351 ms 00:18:06.998 [2024-07-11 18:24:53.224343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.998 [2024-07-11 18:24:53.225679] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:06.998 [2024-07-11 18:24:53.227876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.998 [2024-07-11 18:24:53.227917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:06.998 [2024-07-11 18:24:53.227932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.199 ms 00:18:06.998 [2024-07-11 18:24:53.227944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.998 [2024-07-11 18:24:53.228024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.998 [2024-07-11 18:24:53.228043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:06.998 [2024-07-11 18:24:53.228081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:18:06.998 [2024-07-11 18:24:53.228109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.998 [2024-07-11 18:24:53.232302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
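
Every FTL management step in this log is traced by mngt/ftl_mngt.c:trace_step() as a fixed four-record group (Action or Rollback, then name, duration, status), and the startup sequence that continues below is easiest to skim by pulling out just the name/duration pairs. A minimal sketch, assuming the console output has been saved one record per line under the hypothetical filename build.log:

  # List each traced FTL step with its duration. The 428/430 line
  # numbers are the trace_step "name:" and "duration:" records above.
  awk -F'name: ' '
      /428:trace_step/ { step = $2 }
      /430:trace_step/ { sub(/.*duration: /, ""); print step " -> " $0 }
  ' build.log

One pattern worth knowing when scanning the output: in this log the Rollback records all report duration: 0.000 ms, so a run of zero-duration steps (as in the shutdown sequence earlier) indicates a management path being unwound rather than executed.
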
00:18:06.998 [2024-07-11 18:24:53.232356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:06.998 [2024-07-11 18:24:53.232386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.132 ms 00:18:06.998 [2024-07-11 18:24:53.232396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.998 [2024-07-11 18:24:53.232529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.998 [2024-07-11 18:24:53.232549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:06.998 [2024-07-11 18:24:53.232564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:18:06.998 [2024-07-11 18:24:53.232577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.998 [2024-07-11 18:24:53.232647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.998 [2024-07-11 18:24:53.232662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:06.998 [2024-07-11 18:24:53.232673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:18:06.998 [2024-07-11 18:24:53.232684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.998 [2024-07-11 18:24:53.232713] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:06.998 [2024-07-11 18:24:53.234038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.998 [2024-07-11 18:24:53.234123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:06.998 [2024-07-11 18:24:53.234140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.332 ms 00:18:06.998 [2024-07-11 18:24:53.234150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.998 [2024-07-11 18:24:53.234215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.998 [2024-07-11 18:24:53.234232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:06.998 [2024-07-11 18:24:53.234255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:18:06.998 [2024-07-11 18:24:53.234265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.998 [2024-07-11 18:24:53.234295] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:06.998 [2024-07-11 18:24:53.234336] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:06.998 [2024-07-11 18:24:53.234397] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:06.999 [2024-07-11 18:24:53.234420] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:18:06.999 [2024-07-11 18:24:53.234535] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:06.999 [2024-07-11 18:24:53.234550] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:06.999 [2024-07-11 18:24:53.234576] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:18:06.999 [2024-07-11 18:24:53.234613] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:06.999 [2024-07-11 18:24:53.234627] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:06.999 [2024-07-11 18:24:53.234639] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:06.999 [2024-07-11 18:24:53.234650] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:06.999 [2024-07-11 18:24:53.234667] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:06.999 [2024-07-11 18:24:53.234692] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:06.999 [2024-07-11 18:24:53.234712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.999 [2024-07-11 18:24:53.234724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:06.999 [2024-07-11 18:24:53.234735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.425 ms 00:18:06.999 [2024-07-11 18:24:53.234761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.999 [2024-07-11 18:24:53.234868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.999 [2024-07-11 18:24:53.234884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:06.999 [2024-07-11 18:24:53.234937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:18:06.999 [2024-07-11 18:24:53.234948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.999 [2024-07-11 18:24:53.235055] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:06.999 [2024-07-11 18:24:53.235072] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:06.999 [2024-07-11 18:24:53.235094] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:06.999 [2024-07-11 18:24:53.235105] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:06.999 [2024-07-11 18:24:53.235116] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:06.999 [2024-07-11 18:24:53.235142] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:06.999 [2024-07-11 18:24:53.235154] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:06.999 [2024-07-11 18:24:53.235164] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:06.999 [2024-07-11 18:24:53.235175] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:06.999 [2024-07-11 18:24:53.235185] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:06.999 [2024-07-11 18:24:53.235194] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:06.999 [2024-07-11 18:24:53.235209] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:06.999 [2024-07-11 18:24:53.235219] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:06.999 [2024-07-11 18:24:53.235229] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:06.999 [2024-07-11 18:24:53.235239] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:18:06.999 [2024-07-11 18:24:53.235248] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:06.999 [2024-07-11 18:24:53.235258] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:06.999 [2024-07-11 18:24:53.235267] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:18:06.999 [2024-07-11 18:24:53.235276] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:06.999 [2024-07-11 18:24:53.235286] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:06.999 [2024-07-11 18:24:53.235296] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:06.999 [2024-07-11 18:24:53.235305] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:06.999 [2024-07-11 18:24:53.235315] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:06.999 [2024-07-11 18:24:53.235324] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:06.999 [2024-07-11 18:24:53.235333] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:06.999 [2024-07-11 18:24:53.235343] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:06.999 [2024-07-11 18:24:53.235352] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:06.999 [2024-07-11 18:24:53.235366] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:06.999 [2024-07-11 18:24:53.235376] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:06.999 [2024-07-11 18:24:53.235386] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:18:06.999 [2024-07-11 18:24:53.235395] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:06.999 [2024-07-11 18:24:53.235404] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:06.999 [2024-07-11 18:24:53.235414] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:18:06.999 [2024-07-11 18:24:53.235423] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:06.999 [2024-07-11 18:24:53.235433] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:06.999 [2024-07-11 18:24:53.235442] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:18:06.999 [2024-07-11 18:24:53.235451] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:06.999 [2024-07-11 18:24:53.235461] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:06.999 [2024-07-11 18:24:53.235470] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:18:06.999 [2024-07-11 18:24:53.235480] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:06.999 [2024-07-11 18:24:53.235489] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:06.999 [2024-07-11 18:24:53.235499] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:18:06.999 [2024-07-11 18:24:53.235508] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:06.999 [2024-07-11 18:24:53.235520] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:06.999 [2024-07-11 18:24:53.235531] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:06.999 [2024-07-11 18:24:53.235550] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:06.999 [2024-07-11 18:24:53.235561] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:06.999 [2024-07-11 18:24:53.235572] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:06.999 [2024-07-11 18:24:53.235582] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:06.999 [2024-07-11 18:24:53.235592] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:06.999 
[2024-07-11 18:24:53.235601] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:06.999 [2024-07-11 18:24:53.235610] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:06.999 [2024-07-11 18:24:53.235620] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:06.999 [2024-07-11 18:24:53.235631] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:06.999 [2024-07-11 18:24:53.235644] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:06.999 [2024-07-11 18:24:53.235660] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:06.999 [2024-07-11 18:24:53.235671] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:18:06.999 [2024-07-11 18:24:53.235681] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:18:06.999 [2024-07-11 18:24:53.235692] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:18:06.999 [2024-07-11 18:24:53.235705] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:18:06.999 [2024-07-11 18:24:53.235716] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:18:06.999 [2024-07-11 18:24:53.235726] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:18:06.999 [2024-07-11 18:24:53.235736] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:18:06.999 [2024-07-11 18:24:53.235747] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:18:06.999 [2024-07-11 18:24:53.235757] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:18:06.999 [2024-07-11 18:24:53.235767] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:18:06.999 [2024-07-11 18:24:53.235777] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:18:06.999 [2024-07-11 18:24:53.235788] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:18:06.999 [2024-07-11 18:24:53.235798] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:18:06.999 [2024-07-11 18:24:53.235820] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:06.999 [2024-07-11 18:24:53.235831] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:06.999 [2024-07-11 18:24:53.235843] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:18:07.000 [2024-07-11 18:24:53.235854] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:07.000 [2024-07-11 18:24:53.235865] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:07.000 [2024-07-11 18:24:53.235876] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:07.000 [2024-07-11 18:24:53.235891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.000 [2024-07-11 18:24:53.235903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:07.000 [2024-07-11 18:24:53.235922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.896 ms 00:18:07.000 [2024-07-11 18:24:53.235933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.000 [2024-07-11 18:24:53.251823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.000 [2024-07-11 18:24:53.251897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:07.000 [2024-07-11 18:24:53.251933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.802 ms 00:18:07.000 [2024-07-11 18:24:53.251948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.000 [2024-07-11 18:24:53.252137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.000 [2024-07-11 18:24:53.252158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:07.000 [2024-07-11 18:24:53.252186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:18:07.000 [2024-07-11 18:24:53.252212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.000 [2024-07-11 18:24:53.259592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.000 [2024-07-11 18:24:53.259650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:07.000 [2024-07-11 18:24:53.259682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.349 ms 00:18:07.000 [2024-07-11 18:24:53.259698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.000 [2024-07-11 18:24:53.259762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.000 [2024-07-11 18:24:53.259778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:07.000 [2024-07-11 18:24:53.259789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:07.000 [2024-07-11 18:24:53.259799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.000 [2024-07-11 18:24:53.260184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.000 [2024-07-11 18:24:53.260211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:07.000 [2024-07-11 18:24:53.260235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.339 ms 00:18:07.000 [2024-07-11 18:24:53.260247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.000 [2024-07-11 18:24:53.260402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.000 [2024-07-11 18:24:53.260426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:07.000 [2024-07-11 18:24:53.260447] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.120 ms 00:18:07.000 [2024-07-11 18:24:53.260458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.000 [2024-07-11 18:24:53.265264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.000 [2024-07-11 18:24:53.265318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:07.000 [2024-07-11 18:24:53.265333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.777 ms 00:18:07.000 [2024-07-11 18:24:53.265343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.000 [2024-07-11 18:24:53.267697] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:18:07.000 [2024-07-11 18:24:53.267757] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:07.000 [2024-07-11 18:24:53.267791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.000 [2024-07-11 18:24:53.267802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:07.000 [2024-07-11 18:24:53.267813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.299 ms 00:18:07.000 [2024-07-11 18:24:53.267823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.000 [2024-07-11 18:24:53.281654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.000 [2024-07-11 18:24:53.281723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:07.000 [2024-07-11 18:24:53.281761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.777 ms 00:18:07.000 [2024-07-11 18:24:53.281773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.000 [2024-07-11 18:24:53.283748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.000 [2024-07-11 18:24:53.283799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:07.000 [2024-07-11 18:24:53.283828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.889 ms 00:18:07.000 [2024-07-11 18:24:53.283837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.000 [2024-07-11 18:24:53.285485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.000 [2024-07-11 18:24:53.285519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:07.000 [2024-07-11 18:24:53.285548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.599 ms 00:18:07.000 [2024-07-11 18:24:53.285558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.000 [2024-07-11 18:24:53.285957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.000 [2024-07-11 18:24:53.285995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:07.000 [2024-07-11 18:24:53.286009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.319 ms 00:18:07.000 [2024-07-11 18:24:53.286030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.000 [2024-07-11 18:24:53.301624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.000 [2024-07-11 18:24:53.301720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:07.000 [2024-07-11 18:24:53.301764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
15.509 ms 00:18:07.000 [2024-07-11 18:24:53.301776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.000 [2024-07-11 18:24:53.309329] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:07.000 [2024-07-11 18:24:53.321981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.000 [2024-07-11 18:24:53.322054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:07.000 [2024-07-11 18:24:53.322088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.102 ms 00:18:07.000 [2024-07-11 18:24:53.322108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.000 [2024-07-11 18:24:53.322252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.000 [2024-07-11 18:24:53.322271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:07.000 [2024-07-11 18:24:53.322288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:07.000 [2024-07-11 18:24:53.322298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.000 [2024-07-11 18:24:53.322379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.000 [2024-07-11 18:24:53.322425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:07.000 [2024-07-11 18:24:53.322438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:18:07.000 [2024-07-11 18:24:53.322449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.000 [2024-07-11 18:24:53.322482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.000 [2024-07-11 18:24:53.322510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:07.000 [2024-07-11 18:24:53.322522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:07.000 [2024-07-11 18:24:53.322537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.000 [2024-07-11 18:24:53.322597] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:07.000 [2024-07-11 18:24:53.322631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.000 [2024-07-11 18:24:53.322643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:07.000 [2024-07-11 18:24:53.322654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:18:07.000 [2024-07-11 18:24:53.322665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.000 [2024-07-11 18:24:53.326476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.000 [2024-07-11 18:24:53.326530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:07.000 [2024-07-11 18:24:53.326563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.780 ms 00:18:07.000 [2024-07-11 18:24:53.326575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.000 [2024-07-11 18:24:53.326716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.000 [2024-07-11 18:24:53.326736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:07.000 [2024-07-11 18:24:53.326749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:18:07.000 [2024-07-11 18:24:53.326760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.000 
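
The layout dump printed during the Initialize layout step above is internally consistent: 23592960 L2P entries at the reported 4-byte address size come to exactly the 90.00 MiB shown for the l2p region, and the superblock's region type 0x2 (blk_sz 0x5a00) gives the same figure once block counts are converted at 4096 bytes per block. The 4096-byte FTL block size and the type-0x2/l2p pairing are inferences that the arithmetic below bears out, not values quoted directly in this log:

  # Cross-check the ftl_layout.c / ftl_sb_v5 numbers from the dump above.
  entries=23592960    # "L2P entries"
  addr=4              # "L2P address size"
  blocks=$((0x5a00))  # region type 0x2 blk_sz from the SB metadata dump
  echo "$((entries * addr)) bytes from the L2P table"    # 94371840
  echo "$((blocks * 4096)) bytes from the region size"   # 94371840
  echo "$((entries * addr / 1024 / 1024)) MiB"           # 90 -> 'l2p ... 90.00 MiB'
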
[2024-07-11 18:24:53.327822] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:07.000 [2024-07-11 18:24:53.329181] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 106.402 ms, result 0 00:18:07.000 [2024-07-11 18:24:53.330064] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:07.000 [2024-07-11 18:24:53.339405] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:17.848  Copying: 23/256 [MB] (23 MBps) Copying: 47/256 [MB] (23 MBps) Copying: 70/256 [MB] (23 MBps) Copying: 93/256 [MB] (22 MBps) Copying: 117/256 [MB] (24 MBps) Copying: 141/256 [MB] (24 MBps) Copying: 165/256 [MB] (23 MBps) Copying: 189/256 [MB] (23 MBps) Copying: 212/256 [MB] (23 MBps) Copying: 235/256 [MB] (23 MBps) Copying: 256/256 [MB] (average 23 MBps)[2024-07-11 18:25:04.185594] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:17.848 [2024-07-11 18:25:04.186683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.848 [2024-07-11 18:25:04.186718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:17.848 [2024-07-11 18:25:04.186736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:17.848 [2024-07-11 18:25:04.186748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.848 [2024-07-11 18:25:04.186777] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:17.848 [2024-07-11 18:25:04.187209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.848 [2024-07-11 18:25:04.187238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:17.848 [2024-07-11 18:25:04.187252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.411 ms 00:18:17.848 [2024-07-11 18:25:04.187263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.848 [2024-07-11 18:25:04.188846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.848 [2024-07-11 18:25:04.188905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:17.848 [2024-07-11 18:25:04.188925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.555 ms 00:18:17.848 [2024-07-11 18:25:04.188936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.848 [2024-07-11 18:25:04.195696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.848 [2024-07-11 18:25:04.195766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:17.848 [2024-07-11 18:25:04.195782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.737 ms 00:18:17.848 [2024-07-11 18:25:04.195793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.848 [2024-07-11 18:25:04.202946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.848 [2024-07-11 18:25:04.203011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:17.848 [2024-07-11 18:25:04.203025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.076 ms 00:18:17.848 [2024-07-11 18:25:04.203042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.848 [2024-07-11 18:25:04.204392] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.848 [2024-07-11 18:25:04.204446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:17.848 [2024-07-11 18:25:04.204491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.299 ms 00:18:17.848 [2024-07-11 18:25:04.204516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.848 [2024-07-11 18:25:04.207684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.848 [2024-07-11 18:25:04.207754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:17.848 [2024-07-11 18:25:04.207769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.130 ms 00:18:17.848 [2024-07-11 18:25:04.207780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.848 [2024-07-11 18:25:04.207906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.848 [2024-07-11 18:25:04.207924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:17.848 [2024-07-11 18:25:04.207940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:18:17.848 [2024-07-11 18:25:04.207950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.848 [2024-07-11 18:25:04.209870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.848 [2024-07-11 18:25:04.209955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:18:17.848 [2024-07-11 18:25:04.209969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.899 ms 00:18:17.848 [2024-07-11 18:25:04.209979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.848 [2024-07-11 18:25:04.211454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.848 [2024-07-11 18:25:04.211491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:18:17.848 [2024-07-11 18:25:04.211541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.437 ms 00:18:17.848 [2024-07-11 18:25:04.211562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.848 [2024-07-11 18:25:04.212789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.848 [2024-07-11 18:25:04.212839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:17.848 [2024-07-11 18:25:04.212870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.190 ms 00:18:17.848 [2024-07-11 18:25:04.212880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.848 [2024-07-11 18:25:04.213979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.848 [2024-07-11 18:25:04.214047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:17.848 [2024-07-11 18:25:04.214061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.033 ms 00:18:17.848 [2024-07-11 18:25:04.214072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.848 [2024-07-11 18:25:04.214126] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:17.848 [2024-07-11 18:25:04.214150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:17.848 [2024-07-11 18:25:04.214163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:17.848 
[2024-07-11 18:25:04.214175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:17.848 [2024-07-11 18:25:04.214186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:17.848 [2024-07-11 18:25:04.214198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:17.848 [2024-07-11 18:25:04.214210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:17.848 [2024-07-11 18:25:04.214221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:17.848 [2024-07-11 18:25:04.214233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:17.848 [2024-07-11 18:25:04.214244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:17.848 [2024-07-11 18:25:04.214256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:17.848 [2024-07-11 18:25:04.214268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:17.848 [2024-07-11 18:25:04.214279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:17.848 [2024-07-11 18:25:04.214290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:17.848 [2024-07-11 18:25:04.214301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:17.848 [2024-07-11 18:25:04.214313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:17.848 [2024-07-11 18:25:04.214324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:17.848 [2024-07-11 18:25:04.214336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:17.848 [2024-07-11 18:25:04.214347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:17.848 [2024-07-11 18:25:04.214359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:17.849 [2024-07-11 18:25:04.214370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:17.849 [2024-07-11 18:25:04.214381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:17.849 [2024-07-11 18:25:04.214393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:17.849 [2024-07-11 18:25:04.214404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:17.849 [2024-07-11 18:25:04.214415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:17.849 [2024-07-11 18:25:04.214426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:17.849 [2024-07-11 18:25:04.214439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:17.849 [2024-07-11 18:25:04.214451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 
00:18:17.849 [2024-07-11 18:25:04.214462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:17.849 [2024-07-11 18:25:04.214473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:17.849 [2024-07-11 18:25:04.214485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:17.849 [2024-07-11 18:25:04.214496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:17.849 [2024-07-11 18:25:04.214508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:17.849 [2024-07-11 18:25:04.214520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:17.849 [2024-07-11 18:25:04.214531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:17.849 [2024-07-11 18:25:04.214543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:17.849 [2024-07-11 18:25:04.214554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:17.849 [2024-07-11 18:25:04.214565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:17.849 [2024-07-11 18:25:04.214577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:17.849 [2024-07-11 18:25:04.214588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:17.849 [2024-07-11 18:25:04.214599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:17.849 [2024-07-11 18:25:04.214620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:17.849 [2024-07-11 18:25:04.214650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:17.849 [2024-07-11 18:25:04.214662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:17.849 [2024-07-11 18:25:04.214673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:17.849 [2024-07-11 18:25:04.214685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:17.849 [2024-07-11 18:25:04.214696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:17.849 [2024-07-11 18:25:04.214707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:17.849 [2024-07-11 18:25:04.214719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:17.849 [2024-07-11 18:25:04.214730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:17.849 [2024-07-11 18:25:04.214741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:17.849 [2024-07-11 18:25:04.214753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:17.849 [2024-07-11 18:25:04.214764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 
wr_cnt: 0 state: free 00:18:17.849 [2024-07-11 18:25:04.214775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:17.849 [2024-07-11 18:25:04.214787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:17.849 [2024-07-11 18:25:04.214798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:17.849 [2024-07-11 18:25:04.214815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:17.849 [2024-07-11 18:25:04.214826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:17.849 [2024-07-11 18:25:04.214838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:17.849 [2024-07-11 18:25:04.214849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:17.849 [2024-07-11 18:25:04.214860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:17.849 [2024-07-11 18:25:04.214871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:17.849 [2024-07-11 18:25:04.214883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:17.849 [2024-07-11 18:25:04.214894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:17.849 [2024-07-11 18:25:04.214906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:17.849 [2024-07-11 18:25:04.214918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:17.849 [2024-07-11 18:25:04.214929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:17.849 [2024-07-11 18:25:04.214955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:17.849 [2024-07-11 18:25:04.214983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:17.849 [2024-07-11 18:25:04.214994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:17.849 [2024-07-11 18:25:04.215005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:17.849 [2024-07-11 18:25:04.215017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:17.849 [2024-07-11 18:25:04.215028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:17.849 [2024-07-11 18:25:04.215040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:17.849 [2024-07-11 18:25:04.215051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:17.849 [2024-07-11 18:25:04.215062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:17.849 [2024-07-11 18:25:04.215074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:17.849 [2024-07-11 18:25:04.215085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:17.849 [2024-07-11 18:25:04.215096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:17.849 [2024-07-11 18:25:04.215119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:17.849 [2024-07-11 18:25:04.215135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:17.849 [2024-07-11 18:25:04.215147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:17.849 [2024-07-11 18:25:04.215158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:17.849 [2024-07-11 18:25:04.215170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:17.849 [2024-07-11 18:25:04.215181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:17.849 [2024-07-11 18:25:04.215192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:17.849 [2024-07-11 18:25:04.215203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:17.849 [2024-07-11 18:25:04.215215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:17.849 [2024-07-11 18:25:04.215226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:17.849 [2024-07-11 18:25:04.215238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:17.849 [2024-07-11 18:25:04.215249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:17.849 [2024-07-11 18:25:04.215261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:17.849 [2024-07-11 18:25:04.215272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:17.849 [2024-07-11 18:25:04.215283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:17.849 [2024-07-11 18:25:04.215295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:17.849 [2024-07-11 18:25:04.215306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:17.849 [2024-07-11 18:25:04.215318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:17.849 [2024-07-11 18:25:04.215329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:17.849 [2024-07-11 18:25:04.215341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:17.849 [2024-07-11 18:25:04.215352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:17.849 [2024-07-11 18:25:04.215364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:17.849 [2024-07-11 18:25:04.215384] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:17.849 [2024-07-11 18:25:04.215395] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 
1b71244a-3e55-46f4-8731-7cb3003edfa4 00:18:17.849 [2024-07-11 18:25:04.215407] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:17.849 [2024-07-11 18:25:04.215418] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:17.849 [2024-07-11 18:25:04.215428] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:17.849 [2024-07-11 18:25:04.215451] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:17.849 [2024-07-11 18:25:04.215462] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:17.849 [2024-07-11 18:25:04.215478] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:17.849 [2024-07-11 18:25:04.215489] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:17.849 [2024-07-11 18:25:04.215499] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:17.849 [2024-07-11 18:25:04.215509] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:17.849 [2024-07-11 18:25:04.215520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.849 [2024-07-11 18:25:04.215531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:17.849 [2024-07-11 18:25:04.215547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.396 ms 00:18:17.849 [2024-07-11 18:25:04.215558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.849 [2024-07-11 18:25:04.216923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.850 [2024-07-11 18:25:04.216970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:17.850 [2024-07-11 18:25:04.216984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.339 ms 00:18:17.850 [2024-07-11 18:25:04.217026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.850 [2024-07-11 18:25:04.217143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.850 [2024-07-11 18:25:04.217161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:17.850 [2024-07-11 18:25:04.217174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:18:17.850 [2024-07-11 18:25:04.217184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.850 [2024-07-11 18:25:04.221806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:17.850 [2024-07-11 18:25:04.221859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:17.850 [2024-07-11 18:25:04.221873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:17.850 [2024-07-11 18:25:04.221889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.850 [2024-07-11 18:25:04.221959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:17.850 [2024-07-11 18:25:04.221973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:17.850 [2024-07-11 18:25:04.221984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:17.850 [2024-07-11 18:25:04.221994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.850 [2024-07-11 18:25:04.222043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:17.850 [2024-07-11 18:25:04.222059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:17.850 
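A reading aid for the management traces in this log: every step is emitted as a group of trace_step records, Action (or, on teardown, Rollback) followed by name, duration, and status. Once a capture like this is normalized back to one record per line, the step names and durations can be tabulated with a rough sketch like the following (ftl0.log is an assumed file name, not something the test itself produces):

  grep 'trace_step' ftl0.log \
    | sed -n -e 's/.*name: //p' -e 's/.*duration: //p' \
    | paste - -    # pairs each step name with its reported duration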
[2024-07-11 18:25:04.222075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:17.850 [2024-07-11 18:25:04.222085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.850 [2024-07-11 18:25:04.222163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:17.850 [2024-07-11 18:25:04.222193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:17.850 [2024-07-11 18:25:04.222205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:17.850 [2024-07-11 18:25:04.222216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.850 [2024-07-11 18:25:04.229940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:17.850 [2024-07-11 18:25:04.230014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:17.850 [2024-07-11 18:25:04.230030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:17.850 [2024-07-11 18:25:04.230046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.850 [2024-07-11 18:25:04.236372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:17.850 [2024-07-11 18:25:04.236440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:17.850 [2024-07-11 18:25:04.236472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:17.850 [2024-07-11 18:25:04.236483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.850 [2024-07-11 18:25:04.236539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:17.850 [2024-07-11 18:25:04.236553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:17.850 [2024-07-11 18:25:04.236563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:17.850 [2024-07-11 18:25:04.236587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.850 [2024-07-11 18:25:04.236627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:17.850 [2024-07-11 18:25:04.236651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:17.850 [2024-07-11 18:25:04.236662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:17.850 [2024-07-11 18:25:04.236672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.850 [2024-07-11 18:25:04.236795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:17.850 [2024-07-11 18:25:04.236813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:17.850 [2024-07-11 18:25:04.236825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:17.850 [2024-07-11 18:25:04.236835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.850 [2024-07-11 18:25:04.236881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:17.850 [2024-07-11 18:25:04.236902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:17.850 [2024-07-11 18:25:04.236914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:17.850 [2024-07-11 18:25:04.236924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.850 [2024-07-11 18:25:04.236970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:17.850 [2024-07-11 18:25:04.236984] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:18:17.850 [2024-07-11 18:25:04.236995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:17.850 [2024-07-11 18:25:04.237004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:17.850 [2024-07-11 18:25:04.237056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:17.850 [2024-07-11 18:25:04.237078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:18:17.850 [2024-07-11 18:25:04.237107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:17.850 [2024-07-11 18:25:04.237118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:17.850 [2024-07-11 18:25:04.237355] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 50.645 ms, result 0
00:18:18.417
00:18:18.417
00:18:18.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:18:18.417 18:25:04 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=90805
00:18:18.417 18:25:04 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 90805
00:18:18.417 18:25:04 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init
00:18:18.417 18:25:04 ftl.ftl_trim -- common/autotest_common.sh@829 -- # '[' -z 90805 ']'
00:18:18.417 18:25:04 ftl.ftl_trim -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:18:18.417 18:25:04 ftl.ftl_trim -- common/autotest_common.sh@834 -- # local max_retries=100
00:18:18.417 18:25:04 ftl.ftl_trim -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:18:18.417 18:25:04 ftl.ftl_trim -- common/autotest_common.sh@838 -- # xtrace_disable
00:18:18.417 18:25:04 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x
00:18:18.417 [2024-07-11 18:25:04.683203] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization...
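For orientation before the initialization dump that follows: the traced shell lines above show ftl/trim.sh relaunching the SPDK target with FTL init-phase logging (-L ftl_init) and waiting on the RPC socket before driving the test. A minimal sketch of that flow, assembled only from commands visible in this log; ftl.json is a hypothetical stand-in for the config that trim.sh feeds to load_config (the actual config is not shown in this excerpt):

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init &   # trim.sh@71
  svcpid=$!                                                       # trim.sh@72
  # waitforlisten polls rpc_addr=/var/tmp/spdk.sock (max_retries=100); crude stand-in:
  until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config < ftl.json   # trim.sh@75, below
  # the trim test proper (trim.sh@78/@79, further below): unmap 1024 blocks at the
  # head and at the tail (23591936 = 23592960 - 1024) of the L2P space, then shut down
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024
  kill $svcpid && wait $svcpid   # killprocess equivalent; triggers the 'FTL shutdown' trace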
00:18:18.417 [2024-07-11 18:25:04.683389] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90805 ] 00:18:18.417 [2024-07-11 18:25:04.830201] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:18.675 [2024-07-11 18:25:04.865808] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:19.243 18:25:05 ftl.ftl_trim -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:19.243 18:25:05 ftl.ftl_trim -- common/autotest_common.sh@862 -- # return 0 00:18:19.243 18:25:05 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:18:19.502 [2024-07-11 18:25:05.825948] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:19.502 [2024-07-11 18:25:05.826045] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:19.763 [2024-07-11 18:25:05.998669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.763 [2024-07-11 18:25:05.998741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:19.763 [2024-07-11 18:25:05.998778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:19.763 [2024-07-11 18:25:05.998790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.763 [2024-07-11 18:25:06.001268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.763 [2024-07-11 18:25:06.001322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:19.763 [2024-07-11 18:25:06.001358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.451 ms 00:18:19.763 [2024-07-11 18:25:06.001369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.763 [2024-07-11 18:25:06.001493] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:19.763 [2024-07-11 18:25:06.001801] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:19.763 [2024-07-11 18:25:06.001839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.763 [2024-07-11 18:25:06.001853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:19.763 [2024-07-11 18:25:06.001867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.370 ms 00:18:19.763 [2024-07-11 18:25:06.001878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.763 [2024-07-11 18:25:06.003375] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:19.763 [2024-07-11 18:25:06.005461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.763 [2024-07-11 18:25:06.005522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:19.763 [2024-07-11 18:25:06.005539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.092 ms 00:18:19.763 [2024-07-11 18:25:06.005553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.763 [2024-07-11 18:25:06.005638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.763 [2024-07-11 18:25:06.005668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:19.763 [2024-07-11 18:25:06.005681] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:18:19.763 [2024-07-11 18:25:06.005696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.763 [2024-07-11 18:25:06.009970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.763 [2024-07-11 18:25:06.010029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:19.763 [2024-07-11 18:25:06.010060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.172 ms 00:18:19.763 [2024-07-11 18:25:06.010073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.763 [2024-07-11 18:25:06.010249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.763 [2024-07-11 18:25:06.010274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:19.763 [2024-07-11 18:25:06.010312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:18:19.763 [2024-07-11 18:25:06.010330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.763 [2024-07-11 18:25:06.010372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.763 [2024-07-11 18:25:06.010398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:19.763 [2024-07-11 18:25:06.010411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:18:19.763 [2024-07-11 18:25:06.010423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.763 [2024-07-11 18:25:06.010470] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:19.763 [2024-07-11 18:25:06.011855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.763 [2024-07-11 18:25:06.011907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:19.763 [2024-07-11 18:25:06.011929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.389 ms 00:18:19.763 [2024-07-11 18:25:06.011940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.763 [2024-07-11 18:25:06.011996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.763 [2024-07-11 18:25:06.012019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:19.763 [2024-07-11 18:25:06.012033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:18:19.763 [2024-07-11 18:25:06.012044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.763 [2024-07-11 18:25:06.012074] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:19.763 [2024-07-11 18:25:06.012132] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:19.763 [2024-07-11 18:25:06.012193] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:19.763 [2024-07-11 18:25:06.012245] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:18:19.763 [2024-07-11 18:25:06.012354] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:19.763 [2024-07-11 18:25:06.012370] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:19.763 [2024-07-11 18:25:06.012387] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:18:19.763 [2024-07-11 18:25:06.012402] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:19.764 [2024-07-11 18:25:06.012427] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:19.764 [2024-07-11 18:25:06.012440] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:19.764 [2024-07-11 18:25:06.012456] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:19.764 [2024-07-11 18:25:06.012483] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:19.764 [2024-07-11 18:25:06.012498] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:19.764 [2024-07-11 18:25:06.012510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.764 [2024-07-11 18:25:06.012523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:19.764 [2024-07-11 18:25:06.012534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.441 ms 00:18:19.764 [2024-07-11 18:25:06.012547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.764 [2024-07-11 18:25:06.012652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.764 [2024-07-11 18:25:06.012670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:19.764 [2024-07-11 18:25:06.012684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:18:19.764 [2024-07-11 18:25:06.012697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.764 [2024-07-11 18:25:06.012802] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:19.764 [2024-07-11 18:25:06.012845] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:19.764 [2024-07-11 18:25:06.012865] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:19.764 [2024-07-11 18:25:06.012878] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:19.764 [2024-07-11 18:25:06.012890] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:19.764 [2024-07-11 18:25:06.012905] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:19.764 [2024-07-11 18:25:06.012915] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:19.764 [2024-07-11 18:25:06.012928] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:19.764 [2024-07-11 18:25:06.012940] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:19.764 [2024-07-11 18:25:06.012970] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:19.764 [2024-07-11 18:25:06.012981] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:19.764 [2024-07-11 18:25:06.012993] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:19.764 [2024-07-11 18:25:06.013003] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:19.764 [2024-07-11 18:25:06.013016] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:19.764 [2024-07-11 18:25:06.013027] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:18:19.764 [2024-07-11 18:25:06.013039] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:19.764 
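A consistency note on the figures above: the l2p region is listed at 90.00 MiB, which is exactly the L2P table size implied by the logged parameters, since each of the 23592960 L2P entries takes 4 bytes (the L2P address size). With FTL's 4 KiB logical block size, those entries in turn address 90 GiB of user space on the 101 GiB base device; the remainder is presumably FTL metadata and over-provisioning. A quick check in shell arithmetic:

  echo $(( 23592960 * 4 / 1024 / 1024 ))      # L2P table size in MiB -> 90
  echo $(( 23592960 * 4096 / 1073741824 ))    # user-addressable space in GiB -> 90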
[2024-07-11 18:25:06.013050] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:19.764 [2024-07-11 18:25:06.013062] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:18:19.764 [2024-07-11 18:25:06.013072] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:19.764 [2024-07-11 18:25:06.013085] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:19.764 [2024-07-11 18:25:06.013139] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:19.764 [2024-07-11 18:25:06.013158] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:19.764 [2024-07-11 18:25:06.013170] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:19.764 [2024-07-11 18:25:06.013185] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:19.764 [2024-07-11 18:25:06.013196] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:19.764 [2024-07-11 18:25:06.013209] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:19.764 [2024-07-11 18:25:06.013220] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:19.764 [2024-07-11 18:25:06.013232] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:19.764 [2024-07-11 18:25:06.013243] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:19.764 [2024-07-11 18:25:06.013272] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:18:19.764 [2024-07-11 18:25:06.013282] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:19.764 [2024-07-11 18:25:06.013294] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:19.764 [2024-07-11 18:25:06.013305] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:18:19.764 [2024-07-11 18:25:06.013317] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:19.764 [2024-07-11 18:25:06.013328] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:19.764 [2024-07-11 18:25:06.013341] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:18:19.764 [2024-07-11 18:25:06.013351] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:19.764 [2024-07-11 18:25:06.013365] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:19.764 [2024-07-11 18:25:06.013376] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:18:19.764 [2024-07-11 18:25:06.013388] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:19.764 [2024-07-11 18:25:06.013400] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:19.764 [2024-07-11 18:25:06.013412] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:18:19.764 [2024-07-11 18:25:06.013423] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:19.764 [2024-07-11 18:25:06.013434] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:19.764 [2024-07-11 18:25:06.013446] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:19.764 [2024-07-11 18:25:06.013474] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:19.764 [2024-07-11 18:25:06.013487] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:19.764 [2024-07-11 18:25:06.013500] ftl_layout.c: 118:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:18:19.764 [2024-07-11 18:25:06.013511] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:19.764 [2024-07-11 18:25:06.013523] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:19.764 [2024-07-11 18:25:06.013534] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:19.764 [2024-07-11 18:25:06.013547] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:19.764 [2024-07-11 18:25:06.013557] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:19.764 [2024-07-11 18:25:06.013573] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:19.764 [2024-07-11 18:25:06.013587] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:19.764 [2024-07-11 18:25:06.013601] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:19.764 [2024-07-11 18:25:06.013612] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:18:19.764 [2024-07-11 18:25:06.013625] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:18:19.764 [2024-07-11 18:25:06.013636] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:18:19.764 [2024-07-11 18:25:06.013648] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:18:19.764 [2024-07-11 18:25:06.013659] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:18:19.764 [2024-07-11 18:25:06.013672] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:18:19.764 [2024-07-11 18:25:06.013683] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:18:19.764 [2024-07-11 18:25:06.013696] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:18:19.764 [2024-07-11 18:25:06.013707] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:18:19.764 [2024-07-11 18:25:06.013719] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:18:19.764 [2024-07-11 18:25:06.013730] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:18:19.764 [2024-07-11 18:25:06.013743] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:18:19.764 [2024-07-11 18:25:06.013754] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:18:19.764 [2024-07-11 18:25:06.013769] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:19.764 [2024-07-11 
18:25:06.013783] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:19.764 [2024-07-11 18:25:06.013797] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:19.764 [2024-07-11 18:25:06.013809] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:19.764 [2024-07-11 18:25:06.013822] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:19.764 [2024-07-11 18:25:06.013833] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:19.764 [2024-07-11 18:25:06.013848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.764 [2024-07-11 18:25:06.013860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:19.764 [2024-07-11 18:25:06.013873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.104 ms 00:18:19.764 [2024-07-11 18:25:06.013894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.764 [2024-07-11 18:25:06.022161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.764 [2024-07-11 18:25:06.022229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:19.764 [2024-07-11 18:25:06.022265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.176 ms 00:18:19.764 [2024-07-11 18:25:06.022277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.764 [2024-07-11 18:25:06.022477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.764 [2024-07-11 18:25:06.022497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:19.764 [2024-07-11 18:25:06.022543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:18:19.764 [2024-07-11 18:25:06.022572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.764 [2024-07-11 18:25:06.030701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.764 [2024-07-11 18:25:06.030763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:19.764 [2024-07-11 18:25:06.030798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.095 ms 00:18:19.764 [2024-07-11 18:25:06.030811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.764 [2024-07-11 18:25:06.030910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.764 [2024-07-11 18:25:06.030960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:19.765 [2024-07-11 18:25:06.030974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:19.765 [2024-07-11 18:25:06.030985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.765 [2024-07-11 18:25:06.031384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.765 [2024-07-11 18:25:06.031413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:19.765 [2024-07-11 18:25:06.031430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.370 ms 00:18:19.765 [2024-07-11 18:25:06.031442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:18:19.765 [2024-07-11 18:25:06.031627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.765 [2024-07-11 18:25:06.031657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:19.765 [2024-07-11 18:25:06.031675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.154 ms 00:18:19.765 [2024-07-11 18:25:06.031687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.765 [2024-07-11 18:25:06.037287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.765 [2024-07-11 18:25:06.037344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:19.765 [2024-07-11 18:25:06.037363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.569 ms 00:18:19.765 [2024-07-11 18:25:06.037375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.765 [2024-07-11 18:25:06.039787] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:18:19.765 [2024-07-11 18:25:06.039843] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:19.765 [2024-07-11 18:25:06.039879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.765 [2024-07-11 18:25:06.039891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:19.765 [2024-07-11 18:25:06.039905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.374 ms 00:18:19.765 [2024-07-11 18:25:06.039916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.765 [2024-07-11 18:25:06.054150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.765 [2024-07-11 18:25:06.054206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:19.765 [2024-07-11 18:25:06.054240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.180 ms 00:18:19.765 [2024-07-11 18:25:06.054252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.765 [2024-07-11 18:25:06.056146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.765 [2024-07-11 18:25:06.056226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:19.765 [2024-07-11 18:25:06.056259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.804 ms 00:18:19.765 [2024-07-11 18:25:06.056270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.765 [2024-07-11 18:25:06.057922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.765 [2024-07-11 18:25:06.057975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:19.765 [2024-07-11 18:25:06.057992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.581 ms 00:18:19.765 [2024-07-11 18:25:06.058003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.765 [2024-07-11 18:25:06.058429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.765 [2024-07-11 18:25:06.058473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:19.765 [2024-07-11 18:25:06.058490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.333 ms 00:18:19.765 [2024-07-11 18:25:06.058502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.765 [2024-07-11 18:25:06.087296] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.765 [2024-07-11 18:25:06.087383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:19.765 [2024-07-11 18:25:06.087421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.740 ms 00:18:19.765 [2024-07-11 18:25:06.087433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.765 [2024-07-11 18:25:06.095334] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:19.765 [2024-07-11 18:25:06.108688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.765 [2024-07-11 18:25:06.108793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:19.765 [2024-07-11 18:25:06.108813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.121 ms 00:18:19.765 [2024-07-11 18:25:06.108827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.765 [2024-07-11 18:25:06.108965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.765 [2024-07-11 18:25:06.108987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:19.765 [2024-07-11 18:25:06.109003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:19.765 [2024-07-11 18:25:06.109016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.765 [2024-07-11 18:25:06.109100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.765 [2024-07-11 18:25:06.109195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:19.765 [2024-07-11 18:25:06.109210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:18:19.765 [2024-07-11 18:25:06.109224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.765 [2024-07-11 18:25:06.109260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.765 [2024-07-11 18:25:06.109278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:19.765 [2024-07-11 18:25:06.109291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:19.765 [2024-07-11 18:25:06.109313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.765 [2024-07-11 18:25:06.109355] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:19.765 [2024-07-11 18:25:06.109376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.765 [2024-07-11 18:25:06.109389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:19.765 [2024-07-11 18:25:06.109409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:18:19.765 [2024-07-11 18:25:06.109420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.765 [2024-07-11 18:25:06.113059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.765 [2024-07-11 18:25:06.113131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:19.765 [2024-07-11 18:25:06.113151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.603 ms 00:18:19.765 [2024-07-11 18:25:06.113163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.765 [2024-07-11 18:25:06.113306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.765 [2024-07-11 18:25:06.113327] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:18:19.765 [2024-07-11 18:25:06.113344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms
00:18:19.765 [2024-07-11 18:25:06.113357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:19.765 [2024-07-11 18:25:06.114664] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:18:19.765 [2024-07-11 18:25:06.115860] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 115.612 ms, result 0
00:18:19.765 [2024-07-11 18:25:06.116821] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:18:19.765 Some configs were skipped because the RPC state that can call them passed over.
00:18:19.765 18:25:06 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
00:18:20.024 [2024-07-11 18:25:06.383884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:20.024 [2024-07-11 18:25:06.383999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim
00:18:20.024 [2024-07-11 18:25:06.384036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.658 ms
00:18:20.024 [2024-07-11 18:25:06.384050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:20.024 [2024-07-11 18:25:06.384113] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.880 ms, result 0
00:18:20.024 true
00:18:20.024 18:25:06 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024
00:18:20.283 [2024-07-11 18:25:06.599764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:20.283 [2024-07-11 18:25:06.599838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim
00:18:20.283 [2024-07-11 18:25:06.599891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.280 ms
00:18:20.283 [2024-07-11 18:25:06.599903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:20.283 [2024-07-11 18:25:06.599957] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.485 ms, result 0
00:18:20.283 true
00:18:20.283 18:25:06 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 90805
00:18:20.283 18:25:06 ftl.ftl_trim -- common/autotest_common.sh@948 -- # '[' -z 90805 ']'
00:18:20.283 18:25:06 ftl.ftl_trim -- common/autotest_common.sh@952 -- # kill -0 90805
00:18:20.283 18:25:06 ftl.ftl_trim -- common/autotest_common.sh@953 -- # uname
00:18:20.283 18:25:06 ftl.ftl_trim -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:18:20.283 18:25:06 ftl.ftl_trim -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 90805
00:18:20.283 18:25:06 ftl.ftl_trim -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:18:20.283 18:25:06 ftl.ftl_trim -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:18:20.283 killing process with pid 90805
00:18:20.283 18:25:06 ftl.ftl_trim -- common/autotest_common.sh@966 -- # echo 'killing process with pid 90805'
00:18:20.283 18:25:06 ftl.ftl_trim -- common/autotest_common.sh@967 -- # kill 90805
00:18:20.283 18:25:06 ftl.ftl_trim -- common/autotest_common.sh@972 -- # wait 90805
00:18:20.544 [2024-07-11 18:25:06.751518] mngt/ftl_mngt.c:
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.544 [2024-07-11 18:25:06.751612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:20.544 [2024-07-11 18:25:06.751649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:20.544 [2024-07-11 18:25:06.751663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.544 [2024-07-11 18:25:06.751695] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:20.544 [2024-07-11 18:25:06.752170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.544 [2024-07-11 18:25:06.752212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:20.544 [2024-07-11 18:25:06.752229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.449 ms 00:18:20.544 [2024-07-11 18:25:06.752242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.544 [2024-07-11 18:25:06.752553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.544 [2024-07-11 18:25:06.752594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:20.544 [2024-07-11 18:25:06.752610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.279 ms 00:18:20.544 [2024-07-11 18:25:06.752630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.544 [2024-07-11 18:25:06.756656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.544 [2024-07-11 18:25:06.756698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:20.544 [2024-07-11 18:25:06.756717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.996 ms 00:18:20.544 [2024-07-11 18:25:06.756729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.544 [2024-07-11 18:25:06.763555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.544 [2024-07-11 18:25:06.763606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:20.544 [2024-07-11 18:25:06.763640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.772 ms 00:18:20.544 [2024-07-11 18:25:06.763651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.544 [2024-07-11 18:25:06.764942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.544 [2024-07-11 18:25:06.764998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:20.544 [2024-07-11 18:25:06.765016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.216 ms 00:18:20.544 [2024-07-11 18:25:06.765027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.544 [2024-07-11 18:25:06.768268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.544 [2024-07-11 18:25:06.768325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:20.544 [2024-07-11 18:25:06.768343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.195 ms 00:18:20.544 [2024-07-11 18:25:06.768358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.544 [2024-07-11 18:25:06.768527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.544 [2024-07-11 18:25:06.768555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:20.544 [2024-07-11 18:25:06.768572] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.113 ms 00:18:20.544 [2024-07-11 18:25:06.768599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.544 [2024-07-11 18:25:06.770416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.544 [2024-07-11 18:25:06.770502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:18:20.544 [2024-07-11 18:25:06.770536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.786 ms 00:18:20.544 [2024-07-11 18:25:06.770547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.544 [2024-07-11 18:25:06.772232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.544 [2024-07-11 18:25:06.772287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:18:20.544 [2024-07-11 18:25:06.772320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.631 ms 00:18:20.544 [2024-07-11 18:25:06.772331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.544 [2024-07-11 18:25:06.773595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.544 [2024-07-11 18:25:06.773679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:20.544 [2024-07-11 18:25:06.773695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.217 ms 00:18:20.544 [2024-07-11 18:25:06.773706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.544 [2024-07-11 18:25:06.774983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.544 [2024-07-11 18:25:06.775068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:20.544 [2024-07-11 18:25:06.775126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.189 ms 00:18:20.544 [2024-07-11 18:25:06.775138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.544 [2024-07-11 18:25:06.775183] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:20.544 [2024-07-11 18:25:06.775205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:20.544 [2024-07-11 18:25:06.775221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:20.544 [2024-07-11 18:25:06.775233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:20.544 [2024-07-11 18:25:06.775249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:20.544 [2024-07-11 18:25:06.775277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:20.544 [2024-07-11 18:25:06.775291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:20.544 [2024-07-11 18:25:06.775302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:20.544 [2024-07-11 18:25:06.775316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:20.544 [2024-07-11 18:25:06.775328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:20.544 [2024-07-11 18:25:06.775343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:20.544 [2024-07-11 18:25:06.775355] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:20.544 [2024-07-11 18:25:06.775368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:20.544 [2024-07-11 18:25:06.775380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:20.544 [2024-07-11 18:25:06.775393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:20.544 [2024-07-11 18:25:06.775405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:20.544 [2024-07-11 18:25:06.775418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:20.544 [2024-07-11 18:25:06.775430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:20.544 [2024-07-11 18:25:06.775443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:20.544 [2024-07-11 18:25:06.775470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:20.544 [2024-07-11 18:25:06.775485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:20.544 [2024-07-11 18:25:06.775497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:20.544 [2024-07-11 18:25:06.775509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:20.544 [2024-07-11 18:25:06.775521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:20.544 [2024-07-11 18:25:06.775534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:20.544 [2024-07-11 18:25:06.775558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:20.544 [2024-07-11 18:25:06.775573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:20.544 [2024-07-11 18:25:06.775584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:20.544 [2024-07-11 18:25:06.775597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:20.544 [2024-07-11 18:25:06.775609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:20.544 [2024-07-11 18:25:06.775622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:20.544 [2024-07-11 18:25:06.775634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:20.544 [2024-07-11 18:25:06.775648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:20.544 [2024-07-11 18:25:06.775659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:20.544 [2024-07-11 18:25:06.775672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:20.544 [2024-07-11 18:25:06.775684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:20.544 [2024-07-11 
18:25:06.775701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:20.544 [2024-07-11 18:25:06.775712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:20.544 [2024-07-11 18:25:06.775725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:20.544 [2024-07-11 18:25:06.775736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:20.544 [2024-07-11 18:25:06.775749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:20.544 [2024-07-11 18:25:06.775761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:20.544 [2024-07-11 18:25:06.775774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:20.544 [2024-07-11 18:25:06.775785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:20.544 [2024-07-11 18:25:06.775799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:20.544 [2024-07-11 18:25:06.775810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:20.544 [2024-07-11 18:25:06.775823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:20.544 [2024-07-11 18:25:06.775834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:20.544 [2024-07-11 18:25:06.775847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:20.544 [2024-07-11 18:25:06.775858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:20.544 [2024-07-11 18:25:06.775871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:20.544 [2024-07-11 18:25:06.775883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:20.544 [2024-07-11 18:25:06.775898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:20.544 [2024-07-11 18:25:06.775910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:20.545 [2024-07-11 18:25:06.775923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:20.545 [2024-07-11 18:25:06.775934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:20.545 [2024-07-11 18:25:06.775947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:20.545 [2024-07-11 18:25:06.775959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:20.545 [2024-07-11 18:25:06.775972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:20.545 [2024-07-11 18:25:06.775983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:20.545 [2024-07-11 18:25:06.775996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 
00:18:20.545 [2024-07-11 18:25:06.776008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free
00:18:20.545 [2024-07-11 18:25:06.776022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free
00:18:20.545 [2024-07-11 18:25:06.776034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free
00:18:20.545 [2024-07-11 18:25:06.776049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free
00:18:20.545 [2024-07-11 18:25:06.776060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free
00:18:20.545 [2024-07-11 18:25:06.776073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free
00:18:20.545 [2024-07-11 18:25:06.776085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free
00:18:20.545 [2024-07-11 18:25:06.776100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free
00:18:20.545 [2024-07-11 18:25:06.776125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free
00:18:20.545 [2024-07-11 18:25:06.776142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free
00:18:20.545 [2024-07-11 18:25:06.776153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free
00:18:20.545 [2024-07-11 18:25:06.776166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free
00:18:20.545 [2024-07-11 18:25:06.776194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free
00:18:20.545 [2024-07-11 18:25:06.776208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free
00:18:20.545 [2024-07-11 18:25:06.776220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free
00:18:20.545 [2024-07-11 18:25:06.776233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free
00:18:20.545 [2024-07-11 18:25:06.776245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free
00:18:20.545 [2024-07-11 18:25:06.776265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free
00:18:20.545 [2024-07-11 18:25:06.776276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free
00:18:20.545 [2024-07-11 18:25:06.776290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free
00:18:20.545 [2024-07-11 18:25:06.776301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free
00:18:20.545 [2024-07-11 18:25:06.776315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free
00:18:20.545 [2024-07-11 18:25:06.776326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free
00:18:20.545 [2024-07-11 18:25:06.776342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free
00:18:20.545 [2024-07-11 18:25:06.776353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free
00:18:20.545 [2024-07-11 18:25:06.776367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free
00:18:20.545 [2024-07-11 18:25:06.776379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free
00:18:20.545 [2024-07-11 18:25:06.776394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free
00:18:20.545 [2024-07-11 18:25:06.776406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free
00:18:20.545 [2024-07-11 18:25:06.776419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free
00:18:20.545 [2024-07-11 18:25:06.776431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free
00:18:20.545 [2024-07-11 18:25:06.776444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free
00:18:20.545 [2024-07-11 18:25:06.776456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free
00:18:20.545 [2024-07-11 18:25:06.776469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free
00:18:20.545 [2024-07-11 18:25:06.776481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free
00:18:20.545 [2024-07-11 18:25:06.776495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free
00:18:20.545 [2024-07-11 18:25:06.776507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free
00:18:20.545 [2024-07-11 18:25:06.776520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free
00:18:20.545 [2024-07-11 18:25:06.776532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free
00:18:20.545 [2024-07-11 18:25:06.776547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:18:20.545 [2024-07-11 18:25:06.776568] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:18:20.545 [2024-07-11 18:25:06.776596] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1b71244a-3e55-46f4-8731-7cb3003edfa4
00:18:20.545 [2024-07-11 18:25:06.776608] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:18:20.545 [2024-07-11 18:25:06.776630] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:18:20.545 [2024-07-11 18:25:06.776644] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:18:20.545 [2024-07-11 18:25:06.776656] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:18:20.545 [2024-07-11 18:25:06.776667] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:18:20.545 [2024-07-11 18:25:06.776679] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:18:20.545 [2024-07-11 18:25:06.776690] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:18:20.545 [2024-07-11 18:25:06.776702] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:18:20.545 [2024-07-11 18:25:06.776712] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:18:20.545 [2024-07-11 18:25:06.776726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:20.545 [2024-07-11 18:25:06.776737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:18:20.545 [2024-07-11 18:25:06.776750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.545 ms
00:18:20.545 [2024-07-11 18:25:06.776761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:20.545 [2024-07-11 18:25:06.778229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:20.545 [2024-07-11 18:25:06.778276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:18:20.545 [2024-07-11 18:25:06.778293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.407 ms
00:18:20.545 [2024-07-11 18:25:06.778304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:20.545 [2024-07-11 18:25:06.778388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:20.545 [2024-07-11 18:25:06.778403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:18:20.545 [2024-07-11 18:25:06.778417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms
00:18:20.545 [2024-07-11 18:25:06.778427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:20.545 [2024-07-11 18:25:06.783594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:20.545 [2024-07-11 18:25:06.783655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:18:20.545 [2024-07-11 18:25:06.783682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:20.545 [2024-07-11 18:25:06.783694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:20.545 [2024-07-11 18:25:06.783795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:20.545 [2024-07-11 18:25:06.783812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:18:20.545 [2024-07-11 18:25:06.783826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:20.545 [2024-07-11 18:25:06.783837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:20.545 [2024-07-11 18:25:06.783902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:20.545 [2024-07-11 18:25:06.783919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:18:20.546 [2024-07-11 18:25:06.783933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:20.546 [2024-07-11 18:25:06.783954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:20.546 [2024-07-11 18:25:06.783993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:20.546 [2024-07-11 18:25:06.784006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:18:20.546 [2024-07-11 18:25:06.784019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:20.546 [2024-07-11 18:25:06.784030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:20.546 [2024-07-11 18:25:06.792293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:20.546 [2024-07-11 18:25:06.792374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:18:20.546 [2024-07-11 18:25:06.792394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:20.546 [2024-07-11 18:25:06.792406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:20.546 [2024-07-11 18:25:06.798830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:20.546 [2024-07-11 18:25:06.798903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:18:20.546 [2024-07-11 18:25:06.798940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:20.546 [2024-07-11 18:25:06.798982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:20.546 [2024-07-11 18:25:06.799056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:20.546 [2024-07-11 18:25:06.799083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:18:20.546 [2024-07-11 18:25:06.799096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:20.546 [2024-07-11 18:25:06.799107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:20.546 [2024-07-11 18:25:06.799178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:20.546 [2024-07-11 18:25:06.799207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:18:20.546 [2024-07-11 18:25:06.799222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:20.546 [2024-07-11 18:25:06.799233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:20.546 [2024-07-11 18:25:06.799330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:20.546 [2024-07-11 18:25:06.799349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:18:20.546 [2024-07-11 18:25:06.799366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:20.546 [2024-07-11 18:25:06.799377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:20.546 [2024-07-11 18:25:06.799440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:20.546 [2024-07-11 18:25:06.799457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:18:20.546 [2024-07-11 18:25:06.799471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:20.546 [2024-07-11 18:25:06.799482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:20.546 [2024-07-11 18:25:06.799533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:20.546 [2024-07-11 18:25:06.799548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:18:20.546 [2024-07-11 18:25:06.799561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:20.546 [2024-07-11 18:25:06.799574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:20.546 [2024-07-11 18:25:06.799631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:20.546 [2024-07-11 18:25:06.799647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:18:20.546 [2024-07-11 18:25:06.799661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:20.546 [2024-07-11 18:25:06.799671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:20.546 [2024-07-11 18:25:06.799825] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 48.275 ms, result 0
00:18:20.805 18:25:07 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data
00:18:20.805 18:25:07 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:18:20.805 [2024-07-11 18:25:07.097230] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization...
00:18:20.805 [2024-07-11 18:25:07.097442] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90847 ]
00:18:21.064 [2024-07-11 18:25:07.246011] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:21.064 [2024-07-11 18:25:07.279905] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:18:21.064 [2024-07-11 18:25:07.363529] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:18:21.064 [2024-07-11 18:25:07.363646] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:18:21.324 [2024-07-11 18:25:07.522146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:21.324 [2024-07-11 18:25:07.522237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:18:21.324 [2024-07-11 18:25:07.522274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms
00:18:21.324 [2024-07-11 18:25:07.522295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:21.324 [2024-07-11 18:25:07.524970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:21.324 [2024-07-11 18:25:07.525040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:18:21.324 [2024-07-11 18:25:07.525073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.637 ms
00:18:21.324 [2024-07-11 18:25:07.525084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:21.324 [2024-07-11 18:25:07.525243] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:18:21.324 [2024-07-11 18:25:07.525546] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:18:21.324 [2024-07-11 18:25:07.525600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:21.324 [2024-07-11 18:25:07.525623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:18:21.324 [2024-07-11 18:25:07.525651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.364 ms
00:18:21.324 [2024-07-11 18:25:07.525662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:21.324 [2024-07-11 18:25:07.527061] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:18:21.324 [2024-07-11 18:25:07.529291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:21.324 [2024-07-11 18:25:07.529346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block
00:18:21.324 [2024-07-11 18:25:07.529392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.233 ms
00:18:21.324 [2024-07-11 18:25:07.529404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:21.324 [2024-07-11 18:25:07.529496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:21.324 [2024-07-11 18:25:07.529516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block
00:18:21.324 [2024-07-11 18:25:07.529529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms
00:18:21.324 [2024-07-11 18:25:07.529543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:21.324 [2024-07-11 18:25:07.533848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:21.324 [2024-07-11 18:25:07.533904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:18:21.324 [2024-07-11 18:25:07.533937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.216 ms
00:18:21.324 [2024-07-11 18:25:07.533949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:21.324 [2024-07-11 18:25:07.534090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:21.324 [2024-07-11 18:25:07.534126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:18:21.324 [2024-07-11 18:25:07.534171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms
00:18:21.324 [2024-07-11 18:25:07.534186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:21.324 [2024-07-11 18:25:07.534236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:21.324 [2024-07-11 18:25:07.534253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device
00:18:21.324 [2024-07-11 18:25:07.534266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms
00:18:21.324 [2024-07-11 18:25:07.534277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:21.324 [2024-07-11 18:25:07.534309] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread
00:18:21.324 [2024-07-11 18:25:07.535701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:21.324 [2024-07-11 18:25:07.535750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:18:21.324 [2024-07-11 18:25:07.535764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.400 ms
00:18:21.324 [2024-07-11 18:25:07.535786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:21.324 [2024-07-11 18:25:07.535836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:21.324 [2024-07-11 18:25:07.535861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands
00:18:21.324 [2024-07-11 18:25:07.535874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms
00:18:21.324 [2024-07-11 18:25:07.535884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:21.324 [2024-07-11 18:25:07.535921] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:18:21.324 [2024-07-11 18:25:07.535946] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
00:18:21.324 [2024-07-11 18:25:07.536014] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:18:21.324 [2024-07-11 18:25:07.536043] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes
00:18:21.324 [2024-07-11 18:25:07.536160] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:18:21.324 [2024-07-11 18:25:07.536179] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:18:21.324 [2024-07-11 18:25:07.536193] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes
00:18:21.324 [2024-07-11 18:25:07.536208] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
00:18:21.324 [2024-07-11 18:25:07.536221] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
00:18:21.324 [2024-07-11 18:25:07.536232] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960
00:18:21.324 [2024-07-11 18:25:07.536243] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
00:18:21.324 [2024-07-11 18:25:07.536259] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048
00:18:21.324 [2024-07-11 18:25:07.536270] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5
00:18:21.324 [2024-07-11 18:25:07.536282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:21.324 [2024-07-11 18:25:07.536293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout
00:18:21.324 [2024-07-11 18:25:07.536305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.367 ms
00:18:21.324 [2024-07-11 18:25:07.536319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:21.324 [2024-07-11 18:25:07.536415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:21.324 [2024-07-11 18:25:07.536430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout
00:18:21.324 [2024-07-11 18:25:07.536442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms
00:18:21.324 [2024-07-11 18:25:07.536455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:21.324 [2024-07-11 18:25:07.536564] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:18:21.324 [2024-07-11 18:25:07.536581] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb
00:18:21.324 [2024-07-11 18:25:07.536607] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:18:21.324 [2024-07-11 18:25:07.536620] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:18:21.324 [2024-07-11 18:25:07.536639] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p
00:18:21.324 [2024-07-11 18:25:07.536651] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB
00:18:21.324 [2024-07-11 18:25:07.536661] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB
00:18:21.324 [2024-07-11 18:25:07.536672] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md
00:18:21.324 [2024-07-11 18:25:07.536682] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB
00:18:21.324 [2024-07-11 18:25:07.536692] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:18:21.324 [2024-07-11 18:25:07.536702] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror
00:18:21.324 [2024-07-11 18:25:07.536715] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB
00:18:21.324 [2024-07-11 18:25:07.536726] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:18:21.324 [2024-07-11 18:25:07.536736] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md
00:18:21.324 [2024-07-11 18:25:07.536747] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB
00:18:21.324 [2024-07-11 18:25:07.536757] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:18:21.324 [2024-07-11 18:25:07.536766] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror
00:18:21.324 [2024-07-11 18:25:07.536777] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB
00:18:21.324 [2024-07-11 18:25:07.536787] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:18:21.324 [2024-07-11 18:25:07.536797] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0
00:18:21.324 [2024-07-11 18:25:07.536807] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB
00:18:21.324 [2024-07-11 18:25:07.536817] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:18:21.324 [2024-07-11 18:25:07.536828] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1
00:18:21.324 [2024-07-11 18:25:07.536838] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB
00:18:21.324 [2024-07-11 18:25:07.536848] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:18:21.324 [2024-07-11 18:25:07.536858] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2
00:18:21.324 [2024-07-11 18:25:07.536868] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB
00:18:21.324 [2024-07-11 18:25:07.536883] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:18:21.324 [2024-07-11 18:25:07.536894] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3
00:18:21.324 [2024-07-11 18:25:07.536904] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB
00:18:21.324 [2024-07-11 18:25:07.536914] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:18:21.324 [2024-07-11 18:25:07.536924] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md
00:18:21.324 [2024-07-11 18:25:07.536934] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB
00:18:21.325 [2024-07-11 18:25:07.536944] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:18:21.325 [2024-07-11 18:25:07.536955] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror
00:18:21.325 [2024-07-11 18:25:07.536964] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB
00:18:21.325 [2024-07-11 18:25:07.536974] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:18:21.325 [2024-07-11 18:25:07.536985] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log
00:18:21.325 [2024-07-11 18:25:07.536995] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB
00:18:21.325 [2024-07-11 18:25:07.537006] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:18:21.325 [2024-07-11 18:25:07.537015] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror
00:18:21.325 [2024-07-11 18:25:07.537025] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB
00:18:21.325 [2024-07-11 18:25:07.537035] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:18:21.325 [2024-07-11 18:25:07.537048] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:18:21.325 [2024-07-11 18:25:07.537060] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror
00:18:21.325 [2024-07-11 18:25:07.537071] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:18:21.325 [2024-07-11 18:25:07.537114] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:18:21.325 [2024-07-11 18:25:07.537127] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap
00:18:21.325 [2024-07-11 18:25:07.537139] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB
00:18:21.325 [2024-07-11 18:25:07.537150] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB
00:18:21.325 [2024-07-11 18:25:07.537160] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
00:18:21.325 [2024-07-11 18:25:07.537170] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB
00:18:21.325 [2024-07-11 18:25:07.537180] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB
00:18:21.325 [2024-07-11 18:25:07.537192] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:18:21.325 [2024-07-11 18:25:07.537216] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:18:21.325 [2024-07-11 18:25:07.537234] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00
00:18:21.325 [2024-07-11 18:25:07.537246] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80
00:18:21.325 [2024-07-11 18:25:07.537258] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80
00:18:21.325 [2024-07-11 18:25:07.537269] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800
00:18:21.325 [2024-07-11 18:25:07.537283] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800
00:18:21.325 [2024-07-11 18:25:07.537295] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800
00:18:21.325 [2024-07-11 18:25:07.537307] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800
00:18:21.325 [2024-07-11 18:25:07.537318] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40
00:18:21.325 [2024-07-11 18:25:07.537331] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40
00:18:21.325 [2024-07-11 18:25:07.537342] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20
00:18:21.325 [2024-07-11 18:25:07.537354] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20
00:18:21.325 [2024-07-11 18:25:07.537365] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20
00:18:21.325 [2024-07-11 18:25:07.537376] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20
00:18:21.325 [2024-07-11 18:25:07.537388] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0
00:18:21.325 [2024-07-11 18:25:07.537411] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:18:21.325 [2024-07-11 18:25:07.537425] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:18:21.325 [2024-07-11 18:25:07.537437] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:18:21.325 [2024-07-11 18:25:07.537449] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:18:21.325 [2024-07-11 18:25:07.537460] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:18:21.325 [2024-07-11 18:25:07.537472] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:18:21.325 [2024-07-11 18:25:07.537489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:21.325 [2024-07-11 18:25:07.537509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade
00:18:21.325 [2024-07-11 18:25:07.537530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.985 ms
00:18:21.325 [2024-07-11 18:25:07.537542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:21.325 [2024-07-11 18:25:07.555842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:21.325 [2024-07-11 18:25:07.555918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:18:21.325 [2024-07-11 18:25:07.555940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.191 ms
00:18:21.325 [2024-07-11 18:25:07.555959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:21.325 [2024-07-11 18:25:07.556173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:21.325 [2024-07-11 18:25:07.556194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses
00:18:21.325 [2024-07-11 18:25:07.556208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms
00:18:21.325 [2024-07-11 18:25:07.556218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:21.325 [2024-07-11 18:25:07.563728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:21.325 [2024-07-11 18:25:07.563774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:18:21.325 [2024-07-11 18:25:07.563806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.477 ms
00:18:21.325 [2024-07-11 18:25:07.563822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:21.325 [2024-07-11 18:25:07.563894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:21.325 [2024-07-11 18:25:07.563911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:18:21.325 [2024-07-11 18:25:07.563922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:18:21.325 [2024-07-11 18:25:07.563932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:21.325 [2024-07-11 18:25:07.564287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:21.325 [2024-07-11 18:25:07.564311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:18:21.325 [2024-07-11 18:25:07.564324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.327 ms
00:18:21.325 [2024-07-11 18:25:07.564334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:21.325 [2024-07-11 18:25:07.564545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:21.325 [2024-07-11 18:25:07.564564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:18:21.325 [2024-07-11 18:25:07.564576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.166 ms
00:18:21.325 [2024-07-11 18:25:07.564587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:21.325 [2024-07-11 18:25:07.569367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:21.325 [2024-07-11 18:25:07.569405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:18:21.325 [2024-07-11 18:25:07.569449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.750 ms
00:18:21.325 [2024-07-11 18:25:07.569460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:21.325 [2024-07-11 18:25:07.571872] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3
00:18:21.325 [2024-07-11 18:25:07.571917] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:18:21.325 [2024-07-11 18:25:07.571952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:21.325 [2024-07-11 18:25:07.571964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata
00:18:21.325 [2024-07-11 18:25:07.571975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.320 ms
00:18:21.325 [2024-07-11 18:25:07.571985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:21.325 [2024-07-11 18:25:07.586498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:21.325 [2024-07-11 18:25:07.586537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata
00:18:21.325 [2024-07-11 18:25:07.586575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.461 ms
00:18:21.325 [2024-07-11 18:25:07.586587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:21.325 [2024-07-11 18:25:07.588477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:21.325 [2024-07-11 18:25:07.588514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata
00:18:21.325 [2024-07-11 18:25:07.588561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.774 ms
00:18:21.325 [2024-07-11 18:25:07.588571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:21.325 [2024-07-11 18:25:07.590052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:21.325 [2024-07-11 18:25:07.590103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata
00:18:21.325 [2024-07-11 18:25:07.590136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.433 ms
00:18:21.325 [2024-07-11 18:25:07.590147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:21.325 [2024-07-11 18:25:07.590579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:21.325 [2024-07-11 18:25:07.590610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing
00:18:21.325 [2024-07-11 18:25:07.590650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.333 ms
00:18:21.325 [2024-07-11 18:25:07.590664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:21.325 [2024-07-11 18:25:07.606377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:21.325 [2024-07-11 18:25:07.606457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints
00:18:21.325 [2024-07-11 18:25:07.606492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.662 ms
00:18:21.325 [2024-07-11 18:25:07.606515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:21.325 [2024-07-11 18:25:07.614310] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB
00:18:21.325 [2024-07-11 18:25:07.627388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:21.325 [2024-07-11 18:25:07.627451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P
00:18:21.325 [2024-07-11 18:25:07.627502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.725 ms
00:18:21.325 [2024-07-11 18:25:07.627513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:21.325 [2024-07-11 18:25:07.627654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:21.325 [2024-07-11 18:25:07.627676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P
00:18:21.325 [2024-07-11 18:25:07.627688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms
00:18:21.325 [2024-07-11 18:25:07.627699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:21.325 [2024-07-11 18:25:07.627761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:21.325 [2024-07-11 18:25:07.627778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization
00:18:21.325 [2024-07-11 18:25:07.627790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms
00:18:21.325 [2024-07-11 18:25:07.627800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:21.325 [2024-07-11 18:25:07.627845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:21.325 [2024-07-11 18:25:07.627858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller
00:18:21.325 [2024-07-11 18:25:07.627875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms
00:18:21.325 [2024-07-11 18:25:07.627896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:21.325 [2024-07-11 18:25:07.627931] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:18:21.325 [2024-07-11 18:25:07.627945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:21.325 [2024-07-11 18:25:07.627957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup
00:18:21.325 [2024-07-11 18:25:07.627968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms
00:18:21.325 [2024-07-11 18:25:07.627988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:21.325 [2024-07-11 18:25:07.631642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:21.325 [2024-07-11 18:25:07.631681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:18:21.325 [2024-07-11 18:25:07.631714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.625 ms
00:18:21.325 [2024-07-11 18:25:07.631731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:21.325 [2024-07-11 18:25:07.631830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:21.325 [2024-07-11 18:25:07.631848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:18:21.325 [2024-07-11 18:25:07.631860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms
00:18:21.325 [2024-07-11 18:25:07.631870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:21.325 [2024-07-11 18:25:07.632989] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:18:21.325 [2024-07-11 18:25:07.634232] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 110.505 ms, result 0
00:18:21.325 [2024-07-11 18:25:07.635078] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:18:21.325 [2024-07-11 18:25:07.644356] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:18:32.522  Copying: 26/256 [MB] (26 MBps) Copying: 49/256 [MB] (22 MBps) Copying: 71/256 [MB] (22 MBps) Copying: 94/256 [MB] (22 MBps) Copying: 116/256 [MB] (22 MBps) Copying: 139/256 [MB] (23 MBps) Copying: 163/256 [MB] (23 MBps) Copying: 186/256 [MB] (23 MBps) Copying: 208/256 [MB] (22 MBps) Copying: 231/256 [MB] (23 MBps) Copying: 254/256 [MB] (22 MBps) Copying: 256/256 [MB] (average 23 MBps)
[2024-07-11 18:25:18.717104] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:18:32.522 [2024-07-11 18:25:18.718348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:32.522 [2024-07-11 18:25:18.718518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:18:32.522 [2024-07-11 18:25:18.718705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:18:32.522 [2024-07-11 18:25:18.718730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:32.522 [2024-07-11 18:25:18.718770] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:18:32.522 [2024-07-11 18:25:18.719255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:32.522 [2024-07-11 18:25:18.719283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:18:32.522 [2024-07-11 18:25:18.719296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.464 ms
00:18:32.522 [2024-07-11 18:25:18.719316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:32.522 [2024-07-11 18:25:18.719642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:32.522 [2024-07-11 18:25:18.719663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:18:32.522 [2024-07-11 18:25:18.719674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.284 ms
00:18:32.522 [2024-07-11 18:25:18.719685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:32.522 [2024-07-11 18:25:18.723590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:32.522 [2024-07-11 18:25:18.723617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:18:32.522 [2024-07-11 18:25:18.723645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.869 ms
00:18:32.522 [2024-07-11 18:25:18.723655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:32.522 [2024-07-11 18:25:18.730814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:32.522 [2024-07-11 18:25:18.730846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:18:32.522 [2024-07-11 18:25:18.730874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.137 ms
00:18:32.522 [2024-07-11 18:25:18.730886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:32.522 [2024-07-11 18:25:18.732353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:32.522 [2024-07-11 18:25:18.732393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:18:32.522 [2024-07-11 18:25:18.732408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.382 ms
00:18:32.522 [2024-07-11 18:25:18.732420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:32.522 [2024-07-11 18:25:18.735602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:32.522 [2024-07-11 18:25:18.735638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:18:32.522 [2024-07-11 18:25:18.735667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.155 ms
00:18:32.522 [2024-07-11 18:25:18.735677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:32.522 [2024-07-11 18:25:18.735794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:32.522 [2024-07-11 18:25:18.735814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:18:32.522 [2024-07-11 18:25:18.735825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms
00:18:32.522 [2024-07-11 18:25:18.735843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:32.522 [2024-07-11 18:25:18.737979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:32.522 [2024-07-11 18:25:18.738012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata
00:18:32.522 [2024-07-11 18:25:18.738024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.108 ms
00:18:32.522 [2024-07-11 18:25:18.738034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:32.522 [2024-07-11 18:25:18.739576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:32.522 [2024-07-11 18:25:18.739609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata
00:18:32.523 [2024-07-11 18:25:18.739622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.519 ms
00:18:32.523 [2024-07-11 18:25:18.739631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:32.523 [2024-07-11 18:25:18.740824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:32.523 [2024-07-11 18:25:18.740871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:18:32.523 [2024-07-11 18:25:18.740884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.170 ms
00:18:32.523 [2024-07-11 18:25:18.740893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:32.523 [2024-07-11 18:25:18.742232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:32.523 [2024-07-11 18:25:18.742286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:18:32.523 [2024-07-11 18:25:18.742301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.273 ms
00:18:32.523 [2024-07-11 18:25:18.742312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:32.523 [2024-07-11 18:25:18.742338] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:18:32.523 [2024-07-11 18:25:18.742358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free
00:18:32.523 [2024-07-11 18:25:18.742386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free
00:18:32.523 [2024-07-11 18:25:18.742399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free
00:18:32.523 [2024-07-11 18:25:18.742411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free
00:18:32.523 [2024-07-11 18:25:18.742423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free
00:18:32.523 [2024-07-11 18:25:18.742435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free
00:18:32.523 [2024-07-11 18:25:18.742447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free
00:18:32.523 [2024-07-11 18:25:18.742459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free
00:18:32.523 [2024-07-11 18:25:18.742472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free
00:18:32.523 [2024-07-11 18:25:18.742484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free
00:18:32.523 [2024-07-11 18:25:18.742496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free
00:18:32.523 [2024-07-11 18:25:18.742508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free
00:18:32.523 [2024-07-11 18:25:18.742520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free
00:18:32.523 [2024-07-11 18:25:18.742532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free
00:18:32.523 [2024-07-11 18:25:18.742559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free
00:18:32.523 [2024-07-11 18:25:18.742582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free
00:18:32.523 [2024-07-11 18:25:18.742609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free
00:18:32.523 [2024-07-11 18:25:18.742620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free
00:18:32.523 [2024-07-11 18:25:18.742630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free
00:18:32.523 [2024-07-11 18:25:18.742682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free
00:18:32.523 [2024-07-11 18:25:18.742696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free
00:18:32.523 [2024-07-11 18:25:18.742708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free
00:18:32.523 [2024-07-11 18:25:18.742720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free
00:18:32.523 [2024-07-11 18:25:18.742732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free
00:18:32.523 [2024-07-11 18:25:18.742744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free
00:18:32.523 [2024-07-11 18:25:18.742756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free
00:18:32.523 [2024-07-11 18:25:18.742769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free
00:18:32.523 [2024-07-11 18:25:18.742781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free
00:18:32.523 [2024-07-11 18:25:18.742794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free
00:18:32.523 [2024-07-11 18:25:18.742806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free
00:18:32.523 [2024-07-11 18:25:18.742818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free
00:18:32.523 [2024-07-11 18:25:18.742830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free
00:18:32.523 [2024-07-11 18:25:18.742842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free
00:18:32.523 [2024-07-11 18:25:18.742854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free
00:18:32.523 [2024-07-11 18:25:18.742867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free
00:18:32.523 [2024-07-11 18:25:18.742879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free
00:18:32.523 [2024-07-11 18:25:18.742891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free
00:18:32.523 [2024-07-11 18:25:18.742903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free
00:18:32.523 [2024-07-11 18:25:18.742915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free
00:18:32.523 [2024-07-11 18:25:18.742927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free
00:18:32.523 [2024-07-11 18:25:18.742953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free
00:18:32.523 [2024-07-11 18:25:18.742994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free
00:18:32.523 [2024-07-11 18:25:18.743020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free
00:18:32.523 [2024-07-11 18:25:18.743031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free
00:18:32.523 [2024-07-11 18:25:18.743057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free
00:18:32.523 [2024-07-11 18:25:18.743067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free
00:18:32.523 [2024-07-11 18:25:18.743077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free
00:18:32.523 [2024-07-11 18:25:18.743087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free
00:18:32.523 [2024-07-11 18:25:18.743097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free
00:18:32.523 [2024-07-11 18:25:18.743108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free
00:18:32.523 [2024-07-11 18:25:18.743118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free
00:18:32.523 [2024-07-11 18:25:18.743128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free
00:18:32.523 [2024-07-11 18:25:18.743139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free
00:18:32.523 [2024-07-11 18:25:18.743159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free
00:18:32.523 [2024-07-11 18:25:18.743172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free
00:18:32.523 [2024-07-11 18:25:18.743195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free
00:18:32.523 [2024-07-11 18:25:18.743217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free
00:18:32.523 [2024-07-11 18:25:18.743244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free
00:18:32.523 [2024-07-11 18:25:18.743255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free
00:18:32.523 [2024-07-11 18:25:18.743267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free
00:18:32.523 [2024-07-11 18:25:18.743278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free
00:18:32.523 [2024-07-11 18:25:18.743289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free
00:18:32.523 [2024-07-11 18:25:18.743300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free
00:18:32.523 [2024-07-11 18:25:18.743311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free
00:18:32.523 [2024-07-11 18:25:18.743323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free
00:18:32.523 [2024-07-11 18:25:18.743334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free
00:18:32.523 [2024-07-11 18:25:18.743345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free
00:18:32.523 [2024-07-11 18:25:18.743356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free
00:18:32.523 [2024-07-11 18:25:18.743367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free
00:18:32.523 [2024-07-11 18:25:18.743378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free
00:18:32.523 [2024-07-11 18:25:18.743390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free
00:18:32.523 [2024-07-11 18:25:18.743401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free
00:18:32.523 [2024-07-11 18:25:18.743412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free
00:18:32.523 [2024-07-11 18:25:18.743424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free
00:18:32.523 [2024-07-11 18:25:18.743435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free
00:18:32.523 [2024-07-11 18:25:18.743446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free
00:18:32.523 [2024-07-11 18:25:18.743458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free
00:18:32.523 [2024-07-11 18:25:18.743469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free
00:18:32.523 [2024-07-11 18:25:18.743480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free
00:18:32.523 [2024-07-11 18:25:18.743491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free
00:18:32.524 [2024-07-11 18:25:18.743503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free
00:18:32.524 [2024-07-11 18:25:18.743514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free
00:18:32.524 [2024-07-11 18:25:18.743525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free
00:18:32.524 [2024-07-11 18:25:18.743566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free
00:18:32.524 [2024-07-11 18:25:18.743577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free
00:18:32.524 [2024-07-11 18:25:18.743588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free
00:18:32.524 [2024-07-11 18:25:18.743598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free
00:18:32.524 [2024-07-11 18:25:18.743609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free
00:18:32.524 [2024-07-11 18:25:18.743619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free
00:18:32.524 [2024-07-11 18:25:18.743630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free
00:18:32.524 [2024-07-11 18:25:18.743641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free
00:18:32.524 [2024-07-11 18:25:18.743651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free
00:18:32.524 [2024-07-11 18:25:18.743662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free
00:18:32.524 [2024-07-11 18:25:18.743672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free
00:18:32.524 [2024-07-11 18:25:18.743683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free
00:18:32.524 [2024-07-11 18:25:18.743694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free
00:18:32.524 [2024-07-11 18:25:18.743705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free
00:18:32.524 [2024-07-11 18:25:18.743715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free
00:18:32.524 [2024-07-11 18:25:18.743736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free
00:18:32.524 [2024-07-11 18:25:18.743746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
[FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:32.524 [2024-07-11 18:25:18.743765] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:32.524 [2024-07-11 18:25:18.743775] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1b71244a-3e55-46f4-8731-7cb3003edfa4 00:18:32.524 [2024-07-11 18:25:18.743786] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:32.524 [2024-07-11 18:25:18.743796] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:32.524 [2024-07-11 18:25:18.743806] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:32.524 [2024-07-11 18:25:18.743817] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:32.524 [2024-07-11 18:25:18.743831] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:32.524 [2024-07-11 18:25:18.743842] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:32.524 [2024-07-11 18:25:18.743852] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:32.524 [2024-07-11 18:25:18.743861] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:32.524 [2024-07-11 18:25:18.743870] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:32.524 [2024-07-11 18:25:18.743881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.524 [2024-07-11 18:25:18.743897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:32.524 [2024-07-11 18:25:18.743908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.544 ms 00:18:32.524 [2024-07-11 18:25:18.743918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.524 [2024-07-11 18:25:18.745281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.524 [2024-07-11 18:25:18.745304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:32.524 [2024-07-11 18:25:18.745329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.340 ms 00:18:32.524 [2024-07-11 18:25:18.745341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.524 [2024-07-11 18:25:18.745421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.524 [2024-07-11 18:25:18.745435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:32.524 [2024-07-11 18:25:18.745452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:18:32.524 [2024-07-11 18:25:18.745464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.524 [2024-07-11 18:25:18.751219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:32.524 [2024-07-11 18:25:18.751415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:32.524 [2024-07-11 18:25:18.751645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:32.524 [2024-07-11 18:25:18.751826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.524 [2024-07-11 18:25:18.752033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:32.524 [2024-07-11 18:25:18.752198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:32.524 [2024-07-11 18:25:18.752367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:32.524 [2024-07-11 18:25:18.752474] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:18:32.524 [2024-07-11 18:25:18.752610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:32.524 [2024-07-11 18:25:18.752693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:32.524 [2024-07-11 18:25:18.752793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:32.524 [2024-07-11 18:25:18.752848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.524 [2024-07-11 18:25:18.752934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:32.524 [2024-07-11 18:25:18.753034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:32.524 [2024-07-11 18:25:18.753155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:32.524 [2024-07-11 18:25:18.753285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.524 [2024-07-11 18:25:18.763654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:32.524 [2024-07-11 18:25:18.763991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:32.524 [2024-07-11 18:25:18.764218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:32.524 [2024-07-11 18:25:18.764303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.524 [2024-07-11 18:25:18.772752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:32.524 [2024-07-11 18:25:18.772974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:32.524 [2024-07-11 18:25:18.773117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:32.524 [2024-07-11 18:25:18.773171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.524 [2024-07-11 18:25:18.773328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:32.524 [2024-07-11 18:25:18.773382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:32.524 [2024-07-11 18:25:18.773423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:32.524 [2024-07-11 18:25:18.773585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.524 [2024-07-11 18:25:18.773668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:32.524 [2024-07-11 18:25:18.773740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:32.524 [2024-07-11 18:25:18.773831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:32.524 [2024-07-11 18:25:18.773892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.524 [2024-07-11 18:25:18.774057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:32.524 [2024-07-11 18:25:18.774151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:32.524 [2024-07-11 18:25:18.774323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:32.524 [2024-07-11 18:25:18.774375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.524 [2024-07-11 18:25:18.774587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:32.524 [2024-07-11 18:25:18.774721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:32.524 [2024-07-11 18:25:18.774826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
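
The band-validity dump above is uniform: all 100 bands report 0 / 261120 valid blocks, wr_cnt 0, state free. A minimal sketch for summarizing such a dump offline, assuming the wrapped plain-text log as input; the regex and function names are illustrative, not part of the SPDK tooling:

import re
from collections import Counter

BAND_RE = re.compile(r"Band (\d+): (\d+) / (\d+) wr_cnt: (\d+) state: (\w+)")

def summarize_bands(log_text):
    # Tally bands per state and collect any band that still holds valid
    # data or has seen writes, so a 100-entry dump reduces to one Counter.
    states = Counter()
    busy = []
    for band, valid, total, wr_cnt, state in BAND_RE.findall(log_text):
        states[state] += 1
        if valid != "0" or wr_cnt != "0":
            busy.append((int(band), int(valid), int(wr_cnt), state))
    return states, busy

Run over the full (unelided) dump this yields Counter({'free': 100}) and an empty busy list: no band holds valid data at shutdown.
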
00:18:32.524 [2024-07-11 18:25:18.774874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.524 [2024-07-11 18:25:18.775024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:32.524 [2024-07-11 18:25:18.775079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:32.524 [2024-07-11 18:25:18.775131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:32.524 [2024-07-11 18:25:18.775303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.524 [2024-07-11 18:25:18.775414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:32.524 [2024-07-11 18:25:18.775479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:32.524 [2024-07-11 18:25:18.775632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:32.524 [2024-07-11 18:25:18.775741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.524 [2024-07-11 18:25:18.775952] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 57.561 ms, result 0 00:18:32.782 00:18:32.782 00:18:32.782 18:25:19 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:18:32.782 18:25:19 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:18:33.349 18:25:19 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:33.349 [2024-07-11 18:25:19.666881] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
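
The trim.sh steps above first verify the dumped data file: cmp --bytes=4194304 checks that its first 4 MiB read back identical to /dev/zero, and md5sum fingerprints it. A rough Python equivalent of the zero-check, mirroring what cmp does here rather than replacing it (path handling is illustrative):

def is_zeroed(path, nbytes=4 * 1024 * 1024, chunk=1 << 20):
    # Equivalent of `cmp --bytes=4194304 <path> /dev/zero`: succeed only
    # if the first nbytes of the file exist and are all zero bytes.
    remaining = nbytes
    with open(path, "rb") as f:
        while remaining:
            buf = f.read(min(chunk, remaining))
            if not buf:
                return False  # file shorter than the compared range
            if buf.count(0) != len(buf):
                return False  # found a non-zero byte
            remaining -= len(buf)
    return True

The spdk_dd invocation that follows then rewrites the device, copying 1024 blocks of random_pattern into the ftl0 bdev described by ftl.json.
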
00:18:33.349 [2024-07-11 18:25:19.667119] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90975 ] 00:18:33.607 [2024-07-11 18:25:19.815495] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:33.607 [2024-07-11 18:25:19.849645] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:33.607 [2024-07-11 18:25:19.933919] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:33.607 [2024-07-11 18:25:19.934013] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:33.867 [2024-07-11 18:25:20.092252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.867 [2024-07-11 18:25:20.092315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:33.867 [2024-07-11 18:25:20.092348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:33.867 [2024-07-11 18:25:20.092358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.867 [2024-07-11 18:25:20.094739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.867 [2024-07-11 18:25:20.094783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:33.867 [2024-07-11 18:25:20.094799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.345 ms 00:18:33.867 [2024-07-11 18:25:20.094821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.867 [2024-07-11 18:25:20.094923] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:33.867 [2024-07-11 18:25:20.095323] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:33.867 [2024-07-11 18:25:20.095352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.867 [2024-07-11 18:25:20.095365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:33.867 [2024-07-11 18:25:20.095391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.436 ms 00:18:33.867 [2024-07-11 18:25:20.095401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.867 [2024-07-11 18:25:20.096747] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:33.867 [2024-07-11 18:25:20.099055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.867 [2024-07-11 18:25:20.099118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:33.867 [2024-07-11 18:25:20.099149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.310 ms 00:18:33.867 [2024-07-11 18:25:20.099159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.867 [2024-07-11 18:25:20.099245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.867 [2024-07-11 18:25:20.099263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:33.867 [2024-07-11 18:25:20.099275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:18:33.867 [2024-07-11 18:25:20.099288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.867 [2024-07-11 18:25:20.103484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
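
Each management step in these sequences is traced as an Action (or Rollback) record followed by name, duration, and status records, and finish_msg later reports the aggregate (e.g. 'FTL startup', duration = 105.823 ms). A sketch that stitches the four-record groups back together from the wrapped log text; the regex assumes exactly the record layout shown here and is illustrative only:

import re

# One management step: Action/Rollback, then name, duration (ms), status.
STEP_RE = re.compile(
    r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] (Action|Rollback)"
    r".*?name: (.+?) \d{2}:\d{2}:\d{2}"
    r".*?duration: ([\d.]+) ms"
    r".*?status: (\d+)",
    re.S,
)

def parse_steps(log_text):
    # Yield (kind, step name, duration in ms, status) per traced step.
    for kind, name, dur, status in STEP_RE.findall(log_text):
        yield kind, name.strip(), float(dur), int(status)

Summing the durations of the Action steps of one startup roughly reproduces the total that finish_msg prints, the difference being inter-step overhead.
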
00:18:33.867 [2024-07-11 18:25:20.103519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:33.867 [2024-07-11 18:25:20.103548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.145 ms 00:18:33.867 [2024-07-11 18:25:20.103558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.867 [2024-07-11 18:25:20.103710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.867 [2024-07-11 18:25:20.103749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:33.867 [2024-07-11 18:25:20.103763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:18:33.867 [2024-07-11 18:25:20.103783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.867 [2024-07-11 18:25:20.103820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.867 [2024-07-11 18:25:20.103834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:33.867 [2024-07-11 18:25:20.103844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:18:33.867 [2024-07-11 18:25:20.103853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.867 [2024-07-11 18:25:20.103878] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:33.867 [2024-07-11 18:25:20.105271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.867 [2024-07-11 18:25:20.105308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:33.867 [2024-07-11 18:25:20.105322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.400 ms 00:18:33.867 [2024-07-11 18:25:20.105342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.867 [2024-07-11 18:25:20.105389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.867 [2024-07-11 18:25:20.105404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:33.867 [2024-07-11 18:25:20.105415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:18:33.867 [2024-07-11 18:25:20.105424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.867 [2024-07-11 18:25:20.105450] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:33.867 [2024-07-11 18:25:20.105474] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:33.867 [2024-07-11 18:25:20.105539] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:33.867 [2024-07-11 18:25:20.105561] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:18:33.867 [2024-07-11 18:25:20.105665] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:33.867 [2024-07-11 18:25:20.105687] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:33.868 [2024-07-11 18:25:20.105721] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:18:33.868 [2024-07-11 18:25:20.105734] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:33.868 [2024-07-11 18:25:20.105744] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:33.868 [2024-07-11 18:25:20.105755] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:33.868 [2024-07-11 18:25:20.105771] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:33.868 [2024-07-11 18:25:20.105783] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:33.868 [2024-07-11 18:25:20.105792] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:33.868 [2024-07-11 18:25:20.105802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.868 [2024-07-11 18:25:20.105812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:33.868 [2024-07-11 18:25:20.105821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.358 ms 00:18:33.868 [2024-07-11 18:25:20.105841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.868 [2024-07-11 18:25:20.105926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.868 [2024-07-11 18:25:20.105939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:33.868 [2024-07-11 18:25:20.105948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:18:33.868 [2024-07-11 18:25:20.105958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.868 [2024-07-11 18:25:20.106049] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:33.868 [2024-07-11 18:25:20.106063] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:33.868 [2024-07-11 18:25:20.106082] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:33.868 [2024-07-11 18:25:20.106091] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:33.868 [2024-07-11 18:25:20.106101] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:33.868 [2024-07-11 18:25:20.106109] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:33.868 [2024-07-11 18:25:20.106118] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:33.868 [2024-07-11 18:25:20.106165] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:33.868 [2024-07-11 18:25:20.106176] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:33.868 [2024-07-11 18:25:20.106184] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:33.868 [2024-07-11 18:25:20.106209] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:33.868 [2024-07-11 18:25:20.106222] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:33.868 [2024-07-11 18:25:20.106231] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:33.868 [2024-07-11 18:25:20.106240] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:33.868 [2024-07-11 18:25:20.106266] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:18:33.868 [2024-07-11 18:25:20.106275] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:33.868 [2024-07-11 18:25:20.106284] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:33.868 [2024-07-11 18:25:20.106294] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:18:33.868 [2024-07-11 18:25:20.106302] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:33.868 [2024-07-11 18:25:20.106311] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:33.868 [2024-07-11 18:25:20.106321] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:33.868 [2024-07-11 18:25:20.106331] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:33.868 [2024-07-11 18:25:20.106340] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:33.868 [2024-07-11 18:25:20.106349] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:33.868 [2024-07-11 18:25:20.106357] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:33.868 [2024-07-11 18:25:20.106367] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:33.868 [2024-07-11 18:25:20.106378] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:33.868 [2024-07-11 18:25:20.106392] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:33.868 [2024-07-11 18:25:20.106402] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:33.868 [2024-07-11 18:25:20.106410] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:18:33.868 [2024-07-11 18:25:20.106419] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:33.868 [2024-07-11 18:25:20.106428] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:33.868 [2024-07-11 18:25:20.106438] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:18:33.868 [2024-07-11 18:25:20.106446] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:33.868 [2024-07-11 18:25:20.106455] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:33.868 [2024-07-11 18:25:20.106465] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:18:33.868 [2024-07-11 18:25:20.106489] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:33.868 [2024-07-11 18:25:20.106516] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:33.868 [2024-07-11 18:25:20.106525] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:18:33.868 [2024-07-11 18:25:20.106544] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:33.868 [2024-07-11 18:25:20.106553] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:33.868 [2024-07-11 18:25:20.106574] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:18:33.868 [2024-07-11 18:25:20.106595] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:33.868 [2024-07-11 18:25:20.106606] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:33.868 [2024-07-11 18:25:20.106624] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:33.868 [2024-07-11 18:25:20.106634] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:33.868 [2024-07-11 18:25:20.106672] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:33.868 [2024-07-11 18:25:20.106685] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:33.868 [2024-07-11 18:25:20.106696] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:33.868 [2024-07-11 18:25:20.106706] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:33.868 
[2024-07-11 18:25:20.106716] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:33.868 [2024-07-11 18:25:20.106726] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:33.868 [2024-07-11 18:25:20.106736] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:33.868 [2024-07-11 18:25:20.106748] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:33.868 [2024-07-11 18:25:20.106761] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:33.868 [2024-07-11 18:25:20.106778] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:33.868 [2024-07-11 18:25:20.106789] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:18:33.868 [2024-07-11 18:25:20.106800] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:18:33.868 [2024-07-11 18:25:20.106812] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:18:33.868 [2024-07-11 18:25:20.106825] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:18:33.868 [2024-07-11 18:25:20.106836] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:18:33.868 [2024-07-11 18:25:20.106847] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:18:33.868 [2024-07-11 18:25:20.106858] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:18:33.868 [2024-07-11 18:25:20.106869] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:18:33.868 [2024-07-11 18:25:20.106881] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:18:33.868 [2024-07-11 18:25:20.106892] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:18:33.868 [2024-07-11 18:25:20.106902] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:18:33.868 [2024-07-11 18:25:20.106914] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:18:33.868 [2024-07-11 18:25:20.106925] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:18:33.868 [2024-07-11 18:25:20.106947] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:33.868 [2024-07-11 18:25:20.106960] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:33.868 [2024-07-11 18:25:20.106988] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:18:33.868 [2024-07-11 18:25:20.106999] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:33.868 [2024-07-11 18:25:20.107009] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:33.868 [2024-07-11 18:25:20.107019] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:33.868 [2024-07-11 18:25:20.107048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.868 [2024-07-11 18:25:20.107059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:33.868 [2024-07-11 18:25:20.107069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.050 ms 00:18:33.868 [2024-07-11 18:25:20.107079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.868 [2024-07-11 18:25:20.123841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.868 [2024-07-11 18:25:20.123906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:33.868 [2024-07-11 18:25:20.123928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.685 ms 00:18:33.868 [2024-07-11 18:25:20.123942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.868 [2024-07-11 18:25:20.124137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.868 [2024-07-11 18:25:20.124188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:33.868 [2024-07-11 18:25:20.124218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:18:33.868 [2024-07-11 18:25:20.124229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.868 [2024-07-11 18:25:20.131359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.868 [2024-07-11 18:25:20.131397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:33.868 [2024-07-11 18:25:20.131427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.097 ms 00:18:33.868 [2024-07-11 18:25:20.131443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.868 [2024-07-11 18:25:20.131500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.868 [2024-07-11 18:25:20.131515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:33.869 [2024-07-11 18:25:20.131527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:33.869 [2024-07-11 18:25:20.131537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.869 [2024-07-11 18:25:20.131895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.869 [2024-07-11 18:25:20.131922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:33.869 [2024-07-11 18:25:20.131934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.333 ms 00:18:33.869 [2024-07-11 18:25:20.131944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.869 [2024-07-11 18:25:20.132108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.869 [2024-07-11 18:25:20.132136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:33.869 [2024-07-11 18:25:20.132148] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.130 ms 00:18:33.869 [2024-07-11 18:25:20.132158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.869 [2024-07-11 18:25:20.136837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.869 [2024-07-11 18:25:20.136889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:33.869 [2024-07-11 18:25:20.136903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.651 ms 00:18:33.869 [2024-07-11 18:25:20.136923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.869 [2024-07-11 18:25:20.139402] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:18:33.869 [2024-07-11 18:25:20.139443] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:33.869 [2024-07-11 18:25:20.139478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.869 [2024-07-11 18:25:20.139506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:33.869 [2024-07-11 18:25:20.139517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.427 ms 00:18:33.869 [2024-07-11 18:25:20.139527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.869 [2024-07-11 18:25:20.153320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.869 [2024-07-11 18:25:20.153371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:33.869 [2024-07-11 18:25:20.153405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.726 ms 00:18:33.869 [2024-07-11 18:25:20.153417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.869 [2024-07-11 18:25:20.155364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.869 [2024-07-11 18:25:20.155415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:33.869 [2024-07-11 18:25:20.155443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.863 ms 00:18:33.869 [2024-07-11 18:25:20.155453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.869 [2024-07-11 18:25:20.157042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.869 [2024-07-11 18:25:20.157120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:33.869 [2024-07-11 18:25:20.157151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.540 ms 00:18:33.869 [2024-07-11 18:25:20.157161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.869 [2024-07-11 18:25:20.157558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.869 [2024-07-11 18:25:20.157585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:33.869 [2024-07-11 18:25:20.157597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.316 ms 00:18:33.869 [2024-07-11 18:25:20.157607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.869 [2024-07-11 18:25:20.173245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.869 [2024-07-11 18:25:20.173336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:33.869 [2024-07-11 18:25:20.173369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
15.577 ms 00:18:33.869 [2024-07-11 18:25:20.173390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.869 [2024-07-11 18:25:20.180885] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:33.869 [2024-07-11 18:25:20.193152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.869 [2024-07-11 18:25:20.193213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:33.869 [2024-07-11 18:25:20.193244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.663 ms 00:18:33.869 [2024-07-11 18:25:20.193254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.869 [2024-07-11 18:25:20.193367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.869 [2024-07-11 18:25:20.193388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:33.869 [2024-07-11 18:25:20.193399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:33.869 [2024-07-11 18:25:20.193409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.869 [2024-07-11 18:25:20.193487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.869 [2024-07-11 18:25:20.193520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:33.869 [2024-07-11 18:25:20.193548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:18:33.869 [2024-07-11 18:25:20.193563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.869 [2024-07-11 18:25:20.193606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.869 [2024-07-11 18:25:20.193620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:33.869 [2024-07-11 18:25:20.193635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:33.869 [2024-07-11 18:25:20.193645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.869 [2024-07-11 18:25:20.193680] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:33.869 [2024-07-11 18:25:20.193694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.869 [2024-07-11 18:25:20.193704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:33.869 [2024-07-11 18:25:20.193714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:18:33.869 [2024-07-11 18:25:20.193724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.869 [2024-07-11 18:25:20.197199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.869 [2024-07-11 18:25:20.197251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:33.869 [2024-07-11 18:25:20.197280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.447 ms 00:18:33.869 [2024-07-11 18:25:20.197297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.869 [2024-07-11 18:25:20.197401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.869 [2024-07-11 18:25:20.197418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:33.869 [2024-07-11 18:25:20.197440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:18:33.869 [2024-07-11 18:25:20.197450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.869 
[2024-07-11 18:25:20.198532] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:33.869 [2024-07-11 18:25:20.199707] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 105.823 ms, result 0 00:18:33.869 [2024-07-11 18:25:20.200580] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:33.869 [2024-07-11 18:25:20.209857] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:34.129  Copying: 4096/4096 [kB] (average 22 MBps)[2024-07-11 18:25:20.386633] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:34.129 [2024-07-11 18:25:20.387438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.129 [2024-07-11 18:25:20.387476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:34.129 [2024-07-11 18:25:20.387491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:34.129 [2024-07-11 18:25:20.387501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.129 [2024-07-11 18:25:20.387526] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:34.129 [2024-07-11 18:25:20.387960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.129 [2024-07-11 18:25:20.387993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:34.129 [2024-07-11 18:25:20.388005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.416 ms 00:18:34.129 [2024-07-11 18:25:20.388015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.129 [2024-07-11 18:25:20.389547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.129 [2024-07-11 18:25:20.389594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:34.129 [2024-07-11 18:25:20.389608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.507 ms 00:18:34.129 [2024-07-11 18:25:20.389618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.129 [2024-07-11 18:25:20.393280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.129 [2024-07-11 18:25:20.393340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:34.129 [2024-07-11 18:25:20.393368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.632 ms 00:18:34.129 [2024-07-11 18:25:20.393379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.129 [2024-07-11 18:25:20.400043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.129 [2024-07-11 18:25:20.400120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:34.129 [2024-07-11 18:25:20.400145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.616 ms 00:18:34.129 [2024-07-11 18:25:20.400155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.129 [2024-07-11 18:25:20.401662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.129 [2024-07-11 18:25:20.401741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:34.129 [2024-07-11 18:25:20.401769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.435 ms 
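
The spdk_dd progress line above reports an average of 22 MBps for the 4096 kB copy. Assuming the average is simply data copied over elapsed wall time, and taking roughly 0.18 s between the IO-channel create and destroy records that bracket the copy, the number checks out:

def average_mbps(kib_copied, elapsed_s):
    # spdk_dd-style average rate: kilobytes copied per wall-clock second,
    # expressed in MBps as the progress line prints it (assumed formula).
    return kib_copied / 1024.0 / elapsed_s

print(round(average_mbps(4096, 0.18)))  # -> 22, matching the log line
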
00:18:34.129 [2024-07-11 18:25:20.401794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.129 [2024-07-11 18:25:20.404983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.129 [2024-07-11 18:25:20.405034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:34.129 [2024-07-11 18:25:20.405047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.154 ms 00:18:34.129 [2024-07-11 18:25:20.405056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.129 [2024-07-11 18:25:20.405184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.129 [2024-07-11 18:25:20.405205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:34.129 [2024-07-11 18:25:20.405216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:18:34.129 [2024-07-11 18:25:20.405225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.129 [2024-07-11 18:25:20.407071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.129 [2024-07-11 18:25:20.407179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:18:34.129 [2024-07-11 18:25:20.407193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.810 ms 00:18:34.129 [2024-07-11 18:25:20.407202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.129 [2024-07-11 18:25:20.408729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.129 [2024-07-11 18:25:20.408793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:18:34.129 [2024-07-11 18:25:20.408820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.491 ms 00:18:34.129 [2024-07-11 18:25:20.408829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.129 [2024-07-11 18:25:20.410040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.129 [2024-07-11 18:25:20.410117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:34.129 [2024-07-11 18:25:20.410147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.176 ms 00:18:34.129 [2024-07-11 18:25:20.410157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.129 [2024-07-11 18:25:20.411416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.129 [2024-07-11 18:25:20.411482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:34.129 [2024-07-11 18:25:20.411494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.176 ms 00:18:34.129 [2024-07-11 18:25:20.411503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.130 [2024-07-11 18:25:20.411538] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:34.130 [2024-07-11 18:25:20.411557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:34.130 [2024-07-11 18:25:20.411570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:34.130 [2024-07-11 18:25:20.411581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:34.130 [2024-07-11 18:25:20.411591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:34.130 [2024-07-11 18:25:20.411602] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free
[… ftl_dev_dump_bands: Bands 6-100 elided; every entry is identical: 0 / 261120 wr_cnt: 0 state: free …]
00:18:34.131 [2024-07-11 18:25:20.412688] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:34.131 [2024-07-11 18:25:20.412698] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1b71244a-3e55-46f4-8731-7cb3003edfa4 00:18:34.131 [2024-07-11 18:25:20.412718] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:34.131 [2024-07-11 18:25:20.412728] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:34.131 [2024-07-11
18:25:20.412738] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:34.131 [2024-07-11 18:25:20.412748] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:34.131 [2024-07-11 18:25:20.412761] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:34.131 [2024-07-11 18:25:20.412772] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:34.131 [2024-07-11 18:25:20.412782] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:34.131 [2024-07-11 18:25:20.412790] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:34.131 [2024-07-11 18:25:20.412799] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:34.131 [2024-07-11 18:25:20.412810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.131 [2024-07-11 18:25:20.412823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:34.131 [2024-07-11 18:25:20.412834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.273 ms 00:18:34.131 [2024-07-11 18:25:20.412843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.131 [2024-07-11 18:25:20.414152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.131 [2024-07-11 18:25:20.414196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:34.131 [2024-07-11 18:25:20.414215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.287 ms 00:18:34.131 [2024-07-11 18:25:20.414233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.131 [2024-07-11 18:25:20.414317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.131 [2024-07-11 18:25:20.414330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:34.131 [2024-07-11 18:25:20.414341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:18:34.131 [2024-07-11 18:25:20.414351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.131 [2024-07-11 18:25:20.418636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:34.131 [2024-07-11 18:25:20.418712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:34.131 [2024-07-11 18:25:20.418735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:34.131 [2024-07-11 18:25:20.418746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.131 [2024-07-11 18:25:20.418816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:34.131 [2024-07-11 18:25:20.418833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:34.131 [2024-07-11 18:25:20.418843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:34.131 [2024-07-11 18:25:20.418853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.131 [2024-07-11 18:25:20.418919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:34.131 [2024-07-11 18:25:20.418937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:34.131 [2024-07-11 18:25:20.418953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:34.131 [2024-07-11 18:25:20.418964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.131 [2024-07-11 18:25:20.418986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:18:34.131 [2024-07-11 18:25:20.419014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:34.131 [2024-07-11 18:25:20.419025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:34.131 [2024-07-11 18:25:20.419034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.131 [2024-07-11 18:25:20.426476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:34.131 [2024-07-11 18:25:20.426552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:34.131 [2024-07-11 18:25:20.426572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:34.131 [2024-07-11 18:25:20.426583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.131 [2024-07-11 18:25:20.432843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:34.131 [2024-07-11 18:25:20.432902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:34.131 [2024-07-11 18:25:20.432943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:34.131 [2024-07-11 18:25:20.432960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.131 [2024-07-11 18:25:20.432997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:34.131 [2024-07-11 18:25:20.433009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:34.131 [2024-07-11 18:25:20.433019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:34.131 [2024-07-11 18:25:20.433038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.131 [2024-07-11 18:25:20.433072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:34.131 [2024-07-11 18:25:20.433083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:34.131 [2024-07-11 18:25:20.433110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:34.131 [2024-07-11 18:25:20.433122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.131 [2024-07-11 18:25:20.433238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:34.131 [2024-07-11 18:25:20.433255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:34.131 [2024-07-11 18:25:20.433266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:34.131 [2024-07-11 18:25:20.433286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.131 [2024-07-11 18:25:20.433338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:34.131 [2024-07-11 18:25:20.433353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:34.131 [2024-07-11 18:25:20.433364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:34.131 [2024-07-11 18:25:20.433374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.131 [2024-07-11 18:25:20.433417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:34.131 [2024-07-11 18:25:20.433447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:34.131 [2024-07-11 18:25:20.433462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:34.131 [2024-07-11 18:25:20.433479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.131 
[2024-07-11 18:25:20.433535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:34.131 [2024-07-11 18:25:20.433560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:18:34.131 [2024-07-11 18:25:20.433570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:34.131 [2024-07-11 18:25:20.433580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:34.131 [2024-07-11 18:25:20.433739] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 46.259 ms, result 0
00:18:34.390
00:18:34.390
00:18:34.390 18:25:20 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=90989
00:18:34.390 18:25:20 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init
00:18:34.390 18:25:20 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 90989
00:18:34.390 18:25:20 ftl.ftl_trim -- common/autotest_common.sh@829 -- # '[' -z 90989 ']'
00:18:34.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:18:34.390 18:25:20 ftl.ftl_trim -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:18:34.390 18:25:20 ftl.ftl_trim -- common/autotest_common.sh@834 -- # local max_retries=100
00:18:34.390 18:25:20 ftl.ftl_trim -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:18:34.390 18:25:20 ftl.ftl_trim -- common/autotest_common.sh@838 -- # xtrace_disable
00:18:34.390 18:25:20 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x
00:18:34.390 [2024-07-11 18:25:20.766922] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization...
00:18:34.390 [2024-07-11 18:25:20.767189] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90989 ]
00:18:34.649 [2024-07-11 18:25:20.911563] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:34.649 [2024-07-11 18:25:20.944812] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:18:35.220 18:25:21 ftl.ftl_trim -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:18:35.220 18:25:21 ftl.ftl_trim -- common/autotest_common.sh@862 -- # return 0
00:18:35.220 18:25:21 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config
00:18:35.480 [2024-07-11 18:25:21.849179] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:18:35.480 [2024-07-11 18:25:21.849274] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:18:35.740 [2024-07-11 18:25:22.020675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:35.740 [2024-07-11 18:25:22.020738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:18:35.740 [2024-07-11 18:25:22.020775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:18:35.740 [2024-07-11 18:25:22.020786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:35.740 [2024-07-11 18:25:22.023661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:35.740 [2024-07-11 18:25:22.023732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:18:35.740 [2024-07-11 18:25:22.023766] mngt/ftl_mngt.c: 430:trace_step:
*NOTICE*: [FTL][ftl0] duration: 2.833 ms 00:18:35.740 [2024-07-11 18:25:22.023778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.740 [2024-07-11 18:25:22.023920] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:35.740 [2024-07-11 18:25:22.024325] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:35.740 [2024-07-11 18:25:22.024379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.740 [2024-07-11 18:25:22.024393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:35.740 [2024-07-11 18:25:22.024423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.463 ms 00:18:35.740 [2024-07-11 18:25:22.024449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.740 [2024-07-11 18:25:22.025869] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:35.740 [2024-07-11 18:25:22.028150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.740 [2024-07-11 18:25:22.028226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:35.740 [2024-07-11 18:25:22.028258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.287 ms 00:18:35.740 [2024-07-11 18:25:22.028270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.740 [2024-07-11 18:25:22.028354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.740 [2024-07-11 18:25:22.028375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:35.740 [2024-07-11 18:25:22.028387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:18:35.740 [2024-07-11 18:25:22.028426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.740 [2024-07-11 18:25:22.032805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.740 [2024-07-11 18:25:22.032858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:35.740 [2024-07-11 18:25:22.032887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.288 ms 00:18:35.740 [2024-07-11 18:25:22.032899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.740 [2024-07-11 18:25:22.033031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.740 [2024-07-11 18:25:22.033053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:35.740 [2024-07-11 18:25:22.033081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:18:35.740 [2024-07-11 18:25:22.033114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.740 [2024-07-11 18:25:22.033166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.740 [2024-07-11 18:25:22.033184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:35.740 [2024-07-11 18:25:22.033196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:18:35.740 [2024-07-11 18:25:22.033208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.740 [2024-07-11 18:25:22.033239] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:35.740 [2024-07-11 18:25:22.034557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.740 [2024-07-11 
18:25:22.034618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:35.740 [2024-07-11 18:25:22.034638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.322 ms 00:18:35.740 [2024-07-11 18:25:22.034677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.740 [2024-07-11 18:25:22.034727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.740 [2024-07-11 18:25:22.034742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:35.740 [2024-07-11 18:25:22.034755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:18:35.740 [2024-07-11 18:25:22.034776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.740 [2024-07-11 18:25:22.034819] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:35.740 [2024-07-11 18:25:22.034875] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:35.740 [2024-07-11 18:25:22.034929] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:35.740 [2024-07-11 18:25:22.035000] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:18:35.740 [2024-07-11 18:25:22.035145] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:35.740 [2024-07-11 18:25:22.035183] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:35.740 [2024-07-11 18:25:22.035202] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:18:35.740 [2024-07-11 18:25:22.035218] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:35.740 [2024-07-11 18:25:22.035234] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:35.740 [2024-07-11 18:25:22.035255] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:35.740 [2024-07-11 18:25:22.035271] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:35.740 [2024-07-11 18:25:22.035282] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:35.740 [2024-07-11 18:25:22.035297] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:35.740 [2024-07-11 18:25:22.035309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.740 [2024-07-11 18:25:22.035322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:35.740 [2024-07-11 18:25:22.035334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.497 ms 00:18:35.740 [2024-07-11 18:25:22.035346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.740 [2024-07-11 18:25:22.035438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.740 [2024-07-11 18:25:22.035485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:35.740 [2024-07-11 18:25:22.035496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:18:35.740 [2024-07-11 18:25:22.035508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.740 [2024-07-11 18:25:22.035610] ftl_layout.c: 
758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:35.740 [2024-07-11 18:25:22.035627] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:35.740 [2024-07-11 18:25:22.035638] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:35.740 [2024-07-11 18:25:22.035653] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:35.740 [2024-07-11 18:25:22.035664] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:35.740 [2024-07-11 18:25:22.035677] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:35.740 [2024-07-11 18:25:22.035687] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:35.740 [2024-07-11 18:25:22.035699] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:35.740 [2024-07-11 18:25:22.035711] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:35.740 [2024-07-11 18:25:22.035723] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:35.740 [2024-07-11 18:25:22.035733] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:35.740 [2024-07-11 18:25:22.035744] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:35.740 [2024-07-11 18:25:22.035754] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:35.740 [2024-07-11 18:25:22.035766] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:35.740 [2024-07-11 18:25:22.035776] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:18:35.740 [2024-07-11 18:25:22.035787] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:35.740 [2024-07-11 18:25:22.035797] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:35.740 [2024-07-11 18:25:22.035808] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:18:35.740 [2024-07-11 18:25:22.035818] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:35.740 [2024-07-11 18:25:22.035830] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:35.740 [2024-07-11 18:25:22.035840] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:35.740 [2024-07-11 18:25:22.035853] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:35.740 [2024-07-11 18:25:22.035863] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:35.740 [2024-07-11 18:25:22.035874] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:35.740 [2024-07-11 18:25:22.035883] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:35.740 [2024-07-11 18:25:22.035894] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:35.740 [2024-07-11 18:25:22.035904] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:35.740 [2024-07-11 18:25:22.035915] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:35.740 [2024-07-11 18:25:22.035925] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:35.740 [2024-07-11 18:25:22.035938] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:18:35.740 [2024-07-11 18:25:22.035948] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:35.740 [2024-07-11 18:25:22.035960] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:35.740 [2024-07-11 
18:25:22.035969] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:18:35.740 [2024-07-11 18:25:22.035981] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:35.740 [2024-07-11 18:25:22.035991] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:35.740 [2024-07-11 18:25:22.036002] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:18:35.740 [2024-07-11 18:25:22.036012] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:35.740 [2024-07-11 18:25:22.036025] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:35.740 [2024-07-11 18:25:22.036034] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:18:35.740 [2024-07-11 18:25:22.036046] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:35.740 [2024-07-11 18:25:22.036056] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:35.740 [2024-07-11 18:25:22.036067] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:18:35.740 [2024-07-11 18:25:22.036077] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:35.740 [2024-07-11 18:25:22.036088] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:35.740 [2024-07-11 18:25:22.036098] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:35.740 [2024-07-11 18:25:22.036124] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:35.740 [2024-07-11 18:25:22.036138] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:35.740 [2024-07-11 18:25:22.036151] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:35.740 [2024-07-11 18:25:22.036177] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:35.740 [2024-07-11 18:25:22.036189] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:35.740 [2024-07-11 18:25:22.036200] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:35.740 [2024-07-11 18:25:22.036211] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:35.740 [2024-07-11 18:25:22.036221] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:35.740 [2024-07-11 18:25:22.036237] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:35.740 [2024-07-11 18:25:22.036250] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:35.740 [2024-07-11 18:25:22.036276] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:35.740 [2024-07-11 18:25:22.036287] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:18:35.740 [2024-07-11 18:25:22.036300] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:18:35.740 [2024-07-11 18:25:22.036310] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:18:35.740 [2024-07-11 18:25:22.036323] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:18:35.740 
[2024-07-11 18:25:22.036334] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:18:35.740 [2024-07-11 18:25:22.036346] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:18:35.740 [2024-07-11 18:25:22.036357] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:18:35.740 [2024-07-11 18:25:22.036369] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:18:35.740 [2024-07-11 18:25:22.036380] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:18:35.740 [2024-07-11 18:25:22.036392] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:18:35.740 [2024-07-11 18:25:22.036403] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:18:35.740 [2024-07-11 18:25:22.036415] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:18:35.740 [2024-07-11 18:25:22.036426] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:18:35.740 [2024-07-11 18:25:22.036441] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:35.740 [2024-07-11 18:25:22.036455] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:35.740 [2024-07-11 18:25:22.036469] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:35.740 [2024-07-11 18:25:22.036481] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:35.740 [2024-07-11 18:25:22.036494] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:35.740 [2024-07-11 18:25:22.036505] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:35.740 [2024-07-11 18:25:22.036518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.740 [2024-07-11 18:25:22.036545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:35.740 [2024-07-11 18:25:22.036557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.965 ms 00:18:35.740 [2024-07-11 18:25:22.036579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.740 [2024-07-11 18:25:22.044681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.740 [2024-07-11 18:25:22.044745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:35.740 [2024-07-11 18:25:22.044764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.029 ms 00:18:35.740 [2024-07-11 18:25:22.044776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.740 [2024-07-11 18:25:22.044932] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.740 [2024-07-11 18:25:22.044949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:35.740 [2024-07-11 18:25:22.044965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:18:35.740 [2024-07-11 18:25:22.044992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.740 [2024-07-11 18:25:22.052971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.740 [2024-07-11 18:25:22.053028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:35.740 [2024-07-11 18:25:22.053061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.933 ms 00:18:35.740 [2024-07-11 18:25:22.053072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.740 [2024-07-11 18:25:22.053173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.740 [2024-07-11 18:25:22.053190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:35.740 [2024-07-11 18:25:22.053204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:35.740 [2024-07-11 18:25:22.053214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.740 [2024-07-11 18:25:22.053545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.740 [2024-07-11 18:25:22.053571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:35.740 [2024-07-11 18:25:22.053587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.304 ms 00:18:35.740 [2024-07-11 18:25:22.053599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.740 [2024-07-11 18:25:22.053749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.740 [2024-07-11 18:25:22.053771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:35.740 [2024-07-11 18:25:22.053787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.122 ms 00:18:35.740 [2024-07-11 18:25:22.053798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.740 [2024-07-11 18:25:22.059171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.740 [2024-07-11 18:25:22.059228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:35.740 [2024-07-11 18:25:22.059246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.344 ms 00:18:35.740 [2024-07-11 18:25:22.059256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.740 [2024-07-11 18:25:22.061587] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:18:35.740 [2024-07-11 18:25:22.061640] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:35.740 [2024-07-11 18:25:22.061674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.740 [2024-07-11 18:25:22.061685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:35.740 [2024-07-11 18:25:22.061698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.291 ms 00:18:35.740 [2024-07-11 18:25:22.061708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.740 [2024-07-11 18:25:22.075746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.740 [2024-07-11 
18:25:22.075797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:35.740 [2024-07-11 18:25:22.075839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.988 ms 00:18:35.740 [2024-07-11 18:25:22.075850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.740 [2024-07-11 18:25:22.077825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.740 [2024-07-11 18:25:22.077874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:35.740 [2024-07-11 18:25:22.077905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.887 ms 00:18:35.740 [2024-07-11 18:25:22.077915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.740 [2024-07-11 18:25:22.079627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.740 [2024-07-11 18:25:22.079675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:35.740 [2024-07-11 18:25:22.079706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.665 ms 00:18:35.740 [2024-07-11 18:25:22.079715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.740 [2024-07-11 18:25:22.080104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.740 [2024-07-11 18:25:22.080144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:35.740 [2024-07-11 18:25:22.080160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.313 ms 00:18:35.740 [2024-07-11 18:25:22.080171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.740 [2024-07-11 18:25:22.107084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.740 [2024-07-11 18:25:22.107189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:35.740 [2024-07-11 18:25:22.107237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.860 ms 00:18:35.740 [2024-07-11 18:25:22.107249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.740 [2024-07-11 18:25:22.114638] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:35.740 [2024-07-11 18:25:22.126701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.740 [2024-07-11 18:25:22.126787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:35.740 [2024-07-11 18:25:22.126806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.322 ms 00:18:35.740 [2024-07-11 18:25:22.126820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.740 [2024-07-11 18:25:22.126931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.740 [2024-07-11 18:25:22.126952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:35.740 [2024-07-11 18:25:22.126968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:35.740 [2024-07-11 18:25:22.126992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.740 [2024-07-11 18:25:22.127085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.740 [2024-07-11 18:25:22.127131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:35.740 [2024-07-11 18:25:22.127143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:18:35.740 [2024-07-11 
18:25:22.127182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.740 [2024-07-11 18:25:22.127228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.740 [2024-07-11 18:25:22.127244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:35.740 [2024-07-11 18:25:22.127256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:35.740 [2024-07-11 18:25:22.127274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.740 [2024-07-11 18:25:22.127314] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:35.740 [2024-07-11 18:25:22.127349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.740 [2024-07-11 18:25:22.127361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:35.740 [2024-07-11 18:25:22.127374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:18:35.740 [2024-07-11 18:25:22.127385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.740 [2024-07-11 18:25:22.131052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.740 [2024-07-11 18:25:22.131132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:35.740 [2024-07-11 18:25:22.131167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.623 ms 00:18:35.740 [2024-07-11 18:25:22.131178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.740 [2024-07-11 18:25:22.131287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.740 [2024-07-11 18:25:22.131305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:35.740 [2024-07-11 18:25:22.131318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:18:35.740 [2024-07-11 18:25:22.131329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.740 [2024-07-11 18:25:22.132490] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:35.740 [2024-07-11 18:25:22.133601] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 111.409 ms, result 0 00:18:35.740 [2024-07-11 18:25:22.134588] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:35.999 Some configs were skipped because the RPC state that can call them passed over. 
00:18:35.999 18:25:22 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
00:18:36.258 [2024-07-11 18:25:22.417813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:36.258 [2024-07-11 18:25:22.417909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim
00:18:36.258 [2024-07-11 18:25:22.417951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.756 ms
00:18:36.258 [2024-07-11 18:25:22.417977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:36.258 [2024-07-11 18:25:22.418036] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.970 ms, result 0
00:18:36.258 true
00:18:36.258 18:25:22 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024
00:18:36.258 [2024-07-11 18:25:22.621457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:36.258 [2024-07-11 18:25:22.621503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim
00:18:36.258 [2024-07-11 18:25:22.621538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.245 ms
00:18:36.258 [2024-07-11 18:25:22.621550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:36.258 [2024-07-11 18:25:22.621626] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.422 ms, result 0
00:18:36.258 true
00:18:36.258 18:25:22 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 90989
00:18:36.258 18:25:22 ftl.ftl_trim -- common/autotest_common.sh@948 -- # '[' -z 90989 ']'
00:18:36.258 18:25:22 ftl.ftl_trim -- common/autotest_common.sh@952 -- # kill -0 90989
00:18:36.258 18:25:22 ftl.ftl_trim -- common/autotest_common.sh@953 -- # uname
00:18:36.258 18:25:22 ftl.ftl_trim -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:18:36.258 18:25:22 ftl.ftl_trim -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 90989
killing process with pid 90989
18:25:22 ftl.ftl_trim -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:18:36.258 18:25:22 ftl.ftl_trim -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:18:36.258 18:25:22 ftl.ftl_trim -- common/autotest_common.sh@966 -- # echo 'killing process with pid 90989'
00:18:36.258 18:25:22 ftl.ftl_trim -- common/autotest_common.sh@967 -- # kill 90989
00:18:36.258 18:25:22 ftl.ftl_trim -- common/autotest_common.sh@972 -- # wait 90989
00:18:36.517 [2024-07-11 18:25:22.764808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:36.517 [2024-07-11 18:25:22.764892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:18:36.517 [2024-07-11 18:25:22.764911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:18:36.517 [2024-07-11 18:25:22.764925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:36.517 [2024-07-11 18:25:22.764955] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:18:36.518 [2024-07-11 18:25:22.765484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:36.518 [2024-07-11 18:25:22.765510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:18:36.518 [2024-07-11 18:25:22.765526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*:
[FTL][ftl0] duration: 0.503 ms 00:18:36.518 [2024-07-11 18:25:22.765540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.518 [2024-07-11 18:25:22.765896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.518 [2024-07-11 18:25:22.765930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:36.518 [2024-07-11 18:25:22.765944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.310 ms 00:18:36.518 [2024-07-11 18:25:22.765955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.518 [2024-07-11 18:25:22.770171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.518 [2024-07-11 18:25:22.770243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:36.518 [2024-07-11 18:25:22.770261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.188 ms 00:18:36.518 [2024-07-11 18:25:22.770273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.518 [2024-07-11 18:25:22.777542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.518 [2024-07-11 18:25:22.777589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:36.518 [2024-07-11 18:25:22.777604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.199 ms 00:18:36.518 [2024-07-11 18:25:22.777614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.518 [2024-07-11 18:25:22.779115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.518 [2024-07-11 18:25:22.779178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:36.518 [2024-07-11 18:25:22.779222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.415 ms 00:18:36.518 [2024-07-11 18:25:22.779235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.518 [2024-07-11 18:25:22.782835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.518 [2024-07-11 18:25:22.782889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:36.518 [2024-07-11 18:25:22.782922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.550 ms 00:18:36.518 [2024-07-11 18:25:22.782936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.518 [2024-07-11 18:25:22.783155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.518 [2024-07-11 18:25:22.783174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:36.518 [2024-07-11 18:25:22.783188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.135 ms 00:18:36.518 [2024-07-11 18:25:22.783222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.518 [2024-07-11 18:25:22.784992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.518 [2024-07-11 18:25:22.785056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:18:36.518 [2024-07-11 18:25:22.785085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.741 ms 00:18:36.518 [2024-07-11 18:25:22.785127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.518 [2024-07-11 18:25:22.786810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.518 [2024-07-11 18:25:22.786847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:18:36.518 [2024-07-11 
18:25:22.786863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.635 ms 00:18:36.518 [2024-07-11 18:25:22.786873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.518 [2024-07-11 18:25:22.788316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.518 [2024-07-11 18:25:22.788351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:36.518 [2024-07-11 18:25:22.788366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.397 ms 00:18:36.518 [2024-07-11 18:25:22.788376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.518 [2024-07-11 18:25:22.789681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.518 [2024-07-11 18:25:22.789730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:36.518 [2024-07-11 18:25:22.789760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.231 ms 00:18:36.518 [2024-07-11 18:25:22.789770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.518 [2024-07-11 18:25:22.789826] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:36.518 [2024-07-11 18:25:22.789848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:36.518 [2024-07-11 18:25:22.789863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:36.518 [2024-07-11 18:25:22.789874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:36.518 [2024-07-11 18:25:22.789890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:36.518 [2024-07-11 18:25:22.789900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:36.518 [2024-07-11 18:25:22.789913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:36.518 [2024-07-11 18:25:22.789923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:36.518 [2024-07-11 18:25:22.789935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:36.518 [2024-07-11 18:25:22.789962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:36.518 [2024-07-11 18:25:22.789975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:36.518 [2024-07-11 18:25:22.789986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:36.518 [2024-07-11 18:25:22.789998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:36.518 [2024-07-11 18:25:22.790009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:36.518 [2024-07-11 18:25:22.790021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:36.518 [2024-07-11 18:25:22.790032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:36.518 [2024-07-11 18:25:22.790045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:36.518 [2024-07-11 18:25:22.790055] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:36.518 [2024-07-11 18:25:22.790070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:36.518 [2024-07-11 18:25:22.790080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:36.518 [2024-07-11 18:25:22.790113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:36.518 [2024-07-11 18:25:22.790145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:36.518 [2024-07-11 18:25:22.790159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:36.518 [2024-07-11 18:25:22.790170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:36.518 [2024-07-11 18:25:22.790183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:36.518 [2024-07-11 18:25:22.790223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:36.518 [2024-07-11 18:25:22.790237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:36.518 [2024-07-11 18:25:22.790248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:36.518 [2024-07-11 18:25:22.790261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:36.518 [2024-07-11 18:25:22.790273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:36.518 [2024-07-11 18:25:22.790286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:36.518 [2024-07-11 18:25:22.790298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:36.518 [2024-07-11 18:25:22.790312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:36.518 [2024-07-11 18:25:22.790323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:36.518 [2024-07-11 18:25:22.790336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:36.518 [2024-07-11 18:25:22.790348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:36.518 [2024-07-11 18:25:22.790364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:36.518 [2024-07-11 18:25:22.790375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:36.518 [2024-07-11 18:25:22.790388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:36.518 [2024-07-11 18:25:22.790399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:36.518 [2024-07-11 18:25:22.790412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:36.518 [2024-07-11 18:25:22.790424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:36.518 [2024-07-11 
18:25:22.790439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:36.518 [2024-07-11 18:25:22.790450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:36.518 [2024-07-11 18:25:22.790478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:36.518 [2024-07-11 18:25:22.790489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:36.518 [2024-07-11 18:25:22.790502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:36.518 [2024-07-11 18:25:22.790513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:36.518 [2024-07-11 18:25:22.790526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:36.518 [2024-07-11 18:25:22.790537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:36.518 [2024-07-11 18:25:22.790549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:36.518 [2024-07-11 18:25:22.790560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:36.518 [2024-07-11 18:25:22.790575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:36.518 [2024-07-11 18:25:22.790586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:36.518 [2024-07-11 18:25:22.790598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:36.518 [2024-07-11 18:25:22.790620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:36.518 [2024-07-11 18:25:22.790633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:36.518 [2024-07-11 18:25:22.790644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:36.518 [2024-07-11 18:25:22.790679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:36.519 [2024-07-11 18:25:22.790691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:36.519 [2024-07-11 18:25:22.790704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:36.519 [2024-07-11 18:25:22.790715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:36.519 [2024-07-11 18:25:22.790728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:36.519 [2024-07-11 18:25:22.790740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:36.519 [2024-07-11 18:25:22.790753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:36.519 [2024-07-11 18:25:22.790764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:36.519 [2024-07-11 18:25:22.790776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 
00:18:36.519 [2024-07-11 18:25:22.790787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:36.519 [2024-07-11 18:25:22.790804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:36.519 [2024-07-11 18:25:22.790815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:36.519 [2024-07-11 18:25:22.790827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:36.519 [2024-07-11 18:25:22.790838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:36.519 [2024-07-11 18:25:22.790851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:36.519 [2024-07-11 18:25:22.790863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:36.519 [2024-07-11 18:25:22.790875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:36.519 [2024-07-11 18:25:22.790886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:36.519 [2024-07-11 18:25:22.790899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:36.519 [2024-07-11 18:25:22.790910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:36.519 [2024-07-11 18:25:22.790923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:36.519 [2024-07-11 18:25:22.790934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:36.519 [2024-07-11 18:25:22.790948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:36.519 [2024-07-11 18:25:22.790974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:36.519 [2024-07-11 18:25:22.790986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:36.519 [2024-07-11 18:25:22.790997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:36.519 [2024-07-11 18:25:22.791011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:36.519 [2024-07-11 18:25:22.791022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:36.519 [2024-07-11 18:25:22.791034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:36.519 [2024-07-11 18:25:22.791045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:36.519 [2024-07-11 18:25:22.791057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:36.519 [2024-07-11 18:25:22.791068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:36.519 [2024-07-11 18:25:22.791081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:36.519 [2024-07-11 18:25:22.791091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 
wr_cnt: 0 state: free 00:18:36.519 [2024-07-11 18:25:22.791114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:36.519 [2024-07-11 18:25:22.791128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:36.519 [2024-07-11 18:25:22.791141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:36.519 [2024-07-11 18:25:22.791153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:36.519 [2024-07-11 18:25:22.791167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:36.519 [2024-07-11 18:25:22.791178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:36.519 [2024-07-11 18:25:22.791191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:36.519 [2024-07-11 18:25:22.791201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:36.519 [2024-07-11 18:25:22.791216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:36.519 [2024-07-11 18:25:22.791235] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:36.519 [2024-07-11 18:25:22.791247] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1b71244a-3e55-46f4-8731-7cb3003edfa4 00:18:36.519 [2024-07-11 18:25:22.791258] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:36.519 [2024-07-11 18:25:22.791270] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:36.519 [2024-07-11 18:25:22.791282] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:36.519 [2024-07-11 18:25:22.791295] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:36.519 [2024-07-11 18:25:22.791313] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:36.519 [2024-07-11 18:25:22.791326] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:36.519 [2024-07-11 18:25:22.791336] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:36.519 [2024-07-11 18:25:22.791347] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:36.519 [2024-07-11 18:25:22.791357] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:36.519 [2024-07-11 18:25:22.791369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.519 [2024-07-11 18:25:22.791380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:36.519 [2024-07-11 18:25:22.791393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.547 ms 00:18:36.519 [2024-07-11 18:25:22.791403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.519 [2024-07-11 18:25:22.792741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.519 [2024-07-11 18:25:22.792787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:36.519 [2024-07-11 18:25:22.792803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.309 ms 00:18:36.519 [2024-07-11 18:25:22.792814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.519 [2024-07-11 18:25:22.792913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:18:36.519 [2024-07-11 18:25:22.792929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:36.519 [2024-07-11 18:25:22.792942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:18:36.519 [2024-07-11 18:25:22.792952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.519 [2024-07-11 18:25:22.798050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:36.519 [2024-07-11 18:25:22.798137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:36.519 [2024-07-11 18:25:22.798155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:36.519 [2024-07-11 18:25:22.798167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.519 [2024-07-11 18:25:22.798280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:36.519 [2024-07-11 18:25:22.798297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:36.519 [2024-07-11 18:25:22.798311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:36.519 [2024-07-11 18:25:22.798338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.519 [2024-07-11 18:25:22.798417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:36.519 [2024-07-11 18:25:22.798435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:36.519 [2024-07-11 18:25:22.798464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:36.519 [2024-07-11 18:25:22.798475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.519 [2024-07-11 18:25:22.798501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:36.519 [2024-07-11 18:25:22.798515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:36.519 [2024-07-11 18:25:22.798527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:36.519 [2024-07-11 18:25:22.798538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.519 [2024-07-11 18:25:22.806360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:36.519 [2024-07-11 18:25:22.806427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:36.519 [2024-07-11 18:25:22.806456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:36.519 [2024-07-11 18:25:22.806467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.519 [2024-07-11 18:25:22.812871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:36.519 [2024-07-11 18:25:22.812931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:36.519 [2024-07-11 18:25:22.812963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:36.519 [2024-07-11 18:25:22.812974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.519 [2024-07-11 18:25:22.813058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:36.519 [2024-07-11 18:25:22.813076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:36.519 [2024-07-11 18:25:22.813089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:36.519 [2024-07-11 18:25:22.813179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
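Each management step in the trace above is reported as an Action (or a Rollback during teardown) followed by its name, duration, and status, and the per-step durations can be cross-checked against the 'Management process finished' summary that closes each sequence. A minimal sketch, assuming the log has been captured one entry per line into ftl_trim.log (a hypothetical file name, not part of the output above):

    # Sum the per-step durations reported by trace_step; the filter on
    # /trace_step/ deliberately skips the finish_msg summary entry, so the
    # two totals can be compared independently.
    awk '/trace_step/ && /duration:/ { total += $(NF-1) }
         END { printf "sum of step durations: %.3f ms\n", total }' ftl_trim.log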
00:18:36.519 [2024-07-11 18:25:22.813259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:36.519 [2024-07-11 18:25:22.813284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:36.519 [2024-07-11 18:25:22.813298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:36.519 [2024-07-11 18:25:22.813308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.519 [2024-07-11 18:25:22.813399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:36.519 [2024-07-11 18:25:22.813416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:36.519 [2024-07-11 18:25:22.813433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:36.519 [2024-07-11 18:25:22.813444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.519 [2024-07-11 18:25:22.813499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:36.519 [2024-07-11 18:25:22.813516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:36.519 [2024-07-11 18:25:22.813529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:36.519 [2024-07-11 18:25:22.813540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.519 [2024-07-11 18:25:22.813597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:36.519 [2024-07-11 18:25:22.813620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:36.519 [2024-07-11 18:25:22.813634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:36.519 [2024-07-11 18:25:22.813646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.520 [2024-07-11 18:25:22.813710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:36.520 [2024-07-11 18:25:22.813726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:36.520 [2024-07-11 18:25:22.813739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:36.520 [2024-07-11 18:25:22.813749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.520 [2024-07-11 18:25:22.813902] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 49.079 ms, result 0 00:18:36.778 18:25:23 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:36.778 [2024-07-11 18:25:23.086120] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
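The spdk_dd invocation above reads 65536 blocks (--count=65536) from the ftl0 bdev (--ib=ftl0) into the test data file. Assuming the customary 4 KiB FTL block size (the block size itself is not printed in this excerpt), that works out to the 256 MB reported by the copy progress further down:

    # Sanity-check the spdk_dd transfer size; block_size=4096 is an
    # assumption, not a value taken from the log above.
    blocks=65536
    block_size=4096
    echo "$(( blocks * block_size / 1024 / 1024 )) MiB"   # -> 256 MiB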
00:18:36.778 [2024-07-11 18:25:23.086304] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91030 ] 00:18:37.036 [2024-07-11 18:25:23.225810] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:37.036 [2024-07-11 18:25:23.258366] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:37.036 [2024-07-11 18:25:23.339839] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:37.036 [2024-07-11 18:25:23.339949] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:37.297 [2024-07-11 18:25:23.500079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.297 [2024-07-11 18:25:23.500157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:37.297 [2024-07-11 18:25:23.500191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:37.297 [2024-07-11 18:25:23.500201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.297 [2024-07-11 18:25:23.502568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.297 [2024-07-11 18:25:23.502624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:37.297 [2024-07-11 18:25:23.502638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.340 ms 00:18:37.297 [2024-07-11 18:25:23.502657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.297 [2024-07-11 18:25:23.502773] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:37.297 [2024-07-11 18:25:23.503110] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:37.297 [2024-07-11 18:25:23.503147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.297 [2024-07-11 18:25:23.503165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:37.297 [2024-07-11 18:25:23.503187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.380 ms 00:18:37.297 [2024-07-11 18:25:23.503197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.297 [2024-07-11 18:25:23.504430] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:37.297 [2024-07-11 18:25:23.506629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.297 [2024-07-11 18:25:23.506706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:37.297 [2024-07-11 18:25:23.506737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.201 ms 00:18:37.297 [2024-07-11 18:25:23.506761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.297 [2024-07-11 18:25:23.506847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.297 [2024-07-11 18:25:23.506867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:37.297 [2024-07-11 18:25:23.506888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:18:37.297 [2024-07-11 18:25:23.506902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.297 [2024-07-11 18:25:23.511204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:18:37.297 [2024-07-11 18:25:23.511252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:37.297 [2024-07-11 18:25:23.511282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.246 ms 00:18:37.297 [2024-07-11 18:25:23.511292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.297 [2024-07-11 18:25:23.511418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.297 [2024-07-11 18:25:23.511437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:37.297 [2024-07-11 18:25:23.511468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:18:37.297 [2024-07-11 18:25:23.511493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.297 [2024-07-11 18:25:23.511539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.297 [2024-07-11 18:25:23.511553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:37.297 [2024-07-11 18:25:23.511564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:18:37.297 [2024-07-11 18:25:23.511582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.297 [2024-07-11 18:25:23.511626] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:37.297 [2024-07-11 18:25:23.512972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.297 [2024-07-11 18:25:23.513025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:37.297 [2024-07-11 18:25:23.513048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.370 ms 00:18:37.297 [2024-07-11 18:25:23.513058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.297 [2024-07-11 18:25:23.513146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.297 [2024-07-11 18:25:23.513163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:37.297 [2024-07-11 18:25:23.513175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:18:37.297 [2024-07-11 18:25:23.513185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.297 [2024-07-11 18:25:23.513228] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:37.297 [2024-07-11 18:25:23.513252] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:37.297 [2024-07-11 18:25:23.513317] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:37.297 [2024-07-11 18:25:23.513340] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:18:37.297 [2024-07-11 18:25:23.513456] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:37.297 [2024-07-11 18:25:23.513472] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:37.297 [2024-07-11 18:25:23.513486] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:18:37.297 [2024-07-11 18:25:23.513499] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:37.297 [2024-07-11 18:25:23.513511] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:37.297 [2024-07-11 18:25:23.513522] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:37.297 [2024-07-11 18:25:23.513533] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:37.297 [2024-07-11 18:25:23.513548] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:37.297 [2024-07-11 18:25:23.513558] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:37.297 [2024-07-11 18:25:23.513569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.297 [2024-07-11 18:25:23.513579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:37.297 [2024-07-11 18:25:23.513590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.347 ms 00:18:37.297 [2024-07-11 18:25:23.513622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.297 [2024-07-11 18:25:23.513711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.297 [2024-07-11 18:25:23.513734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:37.297 [2024-07-11 18:25:23.513752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:18:37.297 [2024-07-11 18:25:23.513762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.297 [2024-07-11 18:25:23.513873] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:37.297 [2024-07-11 18:25:23.513888] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:37.297 [2024-07-11 18:25:23.513900] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:37.297 [2024-07-11 18:25:23.513910] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:37.297 [2024-07-11 18:25:23.513921] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:37.297 [2024-07-11 18:25:23.513930] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:37.297 [2024-07-11 18:25:23.513940] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:37.297 [2024-07-11 18:25:23.513949] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:37.297 [2024-07-11 18:25:23.513959] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:37.297 [2024-07-11 18:25:23.513970] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:37.297 [2024-07-11 18:25:23.513980] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:37.297 [2024-07-11 18:25:23.513993] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:37.297 [2024-07-11 18:25:23.514003] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:37.297 [2024-07-11 18:25:23.514012] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:37.297 [2024-07-11 18:25:23.514021] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:18:37.297 [2024-07-11 18:25:23.514030] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:37.297 [2024-07-11 18:25:23.514040] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:37.297 [2024-07-11 18:25:23.514049] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:18:37.297 [2024-07-11 18:25:23.514058] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:37.297 [2024-07-11 18:25:23.514067] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:37.298 [2024-07-11 18:25:23.514076] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:37.298 [2024-07-11 18:25:23.514085] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:37.298 [2024-07-11 18:25:23.514095] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:37.298 [2024-07-11 18:25:23.514104] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:37.298 [2024-07-11 18:25:23.514113] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:37.298 [2024-07-11 18:25:23.514122] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:37.298 [2024-07-11 18:25:23.514131] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:37.298 [2024-07-11 18:25:23.514144] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:37.298 [2024-07-11 18:25:23.514168] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:37.298 [2024-07-11 18:25:23.514180] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:18:37.298 [2024-07-11 18:25:23.514189] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:37.298 [2024-07-11 18:25:23.514199] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:37.298 [2024-07-11 18:25:23.514208] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:18:37.298 [2024-07-11 18:25:23.514217] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:37.298 [2024-07-11 18:25:23.514226] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:37.298 [2024-07-11 18:25:23.514235] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:18:37.298 [2024-07-11 18:25:23.514244] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:37.298 [2024-07-11 18:25:23.514253] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:37.298 [2024-07-11 18:25:23.514262] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:18:37.298 [2024-07-11 18:25:23.514271] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:37.298 [2024-07-11 18:25:23.514280] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:37.298 [2024-07-11 18:25:23.514289] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:18:37.298 [2024-07-11 18:25:23.514298] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:37.298 [2024-07-11 18:25:23.514310] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:37.298 [2024-07-11 18:25:23.514321] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:37.298 [2024-07-11 18:25:23.514331] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:37.298 [2024-07-11 18:25:23.514341] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:37.298 [2024-07-11 18:25:23.514360] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:37.298 [2024-07-11 18:25:23.514370] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:37.298 [2024-07-11 18:25:23.514380] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:37.298 
[2024-07-11 18:25:23.514389] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:37.298 [2024-07-11 18:25:23.514398] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:37.298 [2024-07-11 18:25:23.514407] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:37.298 [2024-07-11 18:25:23.514418] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:37.298 [2024-07-11 18:25:23.514438] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:37.298 [2024-07-11 18:25:23.514454] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:37.298 [2024-07-11 18:25:23.514464] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:18:37.298 [2024-07-11 18:25:23.514474] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:18:37.298 [2024-07-11 18:25:23.514484] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:18:37.298 [2024-07-11 18:25:23.514496] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:18:37.298 [2024-07-11 18:25:23.514507] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:18:37.298 [2024-07-11 18:25:23.514517] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:18:37.298 [2024-07-11 18:25:23.514527] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:18:37.298 [2024-07-11 18:25:23.514537] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:18:37.298 [2024-07-11 18:25:23.514547] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:18:37.298 [2024-07-11 18:25:23.514557] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:18:37.298 [2024-07-11 18:25:23.514567] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:18:37.298 [2024-07-11 18:25:23.514576] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:18:37.298 [2024-07-11 18:25:23.514587] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:18:37.298 [2024-07-11 18:25:23.514606] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:37.298 [2024-07-11 18:25:23.514618] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:37.298 [2024-07-11 18:25:23.514629] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:18:37.298 [2024-07-11 18:25:23.514639] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:37.298 [2024-07-11 18:25:23.514659] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:37.298 [2024-07-11 18:25:23.514688] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:37.298 [2024-07-11 18:25:23.514704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.298 [2024-07-11 18:25:23.514716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:37.298 [2024-07-11 18:25:23.514727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.887 ms 00:18:37.298 [2024-07-11 18:25:23.514738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.298 [2024-07-11 18:25:23.534120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.298 [2024-07-11 18:25:23.534192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:37.298 [2024-07-11 18:25:23.534231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.311 ms 00:18:37.298 [2024-07-11 18:25:23.534251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.298 [2024-07-11 18:25:23.534477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.298 [2024-07-11 18:25:23.534513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:37.298 [2024-07-11 18:25:23.534531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:18:37.298 [2024-07-11 18:25:23.534545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.298 [2024-07-11 18:25:23.543802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.298 [2024-07-11 18:25:23.543857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:37.298 [2024-07-11 18:25:23.543888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.200 ms 00:18:37.298 [2024-07-11 18:25:23.543903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.298 [2024-07-11 18:25:23.543966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.298 [2024-07-11 18:25:23.543982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:37.298 [2024-07-11 18:25:23.543994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:37.298 [2024-07-11 18:25:23.544003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.298 [2024-07-11 18:25:23.544330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.298 [2024-07-11 18:25:23.544357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:37.298 [2024-07-11 18:25:23.544369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.301 ms 00:18:37.298 [2024-07-11 18:25:23.544379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.298 [2024-07-11 18:25:23.544549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.298 [2024-07-11 18:25:23.544567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:37.298 [2024-07-11 18:25:23.544579] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.116 ms 00:18:37.298 [2024-07-11 18:25:23.544597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.298 [2024-07-11 18:25:23.549405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.298 [2024-07-11 18:25:23.549469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:37.298 [2024-07-11 18:25:23.549499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.780 ms 00:18:37.298 [2024-07-11 18:25:23.549511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.298 [2024-07-11 18:25:23.551804] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:18:37.298 [2024-07-11 18:25:23.551859] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:37.298 [2024-07-11 18:25:23.551894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.298 [2024-07-11 18:25:23.551905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:37.298 [2024-07-11 18:25:23.551916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.245 ms 00:18:37.298 [2024-07-11 18:25:23.551926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.298 [2024-07-11 18:25:23.566316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.298 [2024-07-11 18:25:23.566380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:37.298 [2024-07-11 18:25:23.566414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.339 ms 00:18:37.298 [2024-07-11 18:25:23.566425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.298 [2024-07-11 18:25:23.568272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.298 [2024-07-11 18:25:23.568307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:37.298 [2024-07-11 18:25:23.568353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.756 ms 00:18:37.298 [2024-07-11 18:25:23.568363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.298 [2024-07-11 18:25:23.569899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.298 [2024-07-11 18:25:23.569950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:37.298 [2024-07-11 18:25:23.569980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.488 ms 00:18:37.298 [2024-07-11 18:25:23.569990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.298 [2024-07-11 18:25:23.570401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.298 [2024-07-11 18:25:23.570431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:37.298 [2024-07-11 18:25:23.570444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.330 ms 00:18:37.299 [2024-07-11 18:25:23.570469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.299 [2024-07-11 18:25:23.586374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.299 [2024-07-11 18:25:23.586457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:37.299 [2024-07-11 18:25:23.586491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
15.858 ms 00:18:37.299 [2024-07-11 18:25:23.586503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.299 [2024-07-11 18:25:23.594531] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:37.299 [2024-07-11 18:25:23.607366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.299 [2024-07-11 18:25:23.607432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:37.299 [2024-07-11 18:25:23.607491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.762 ms 00:18:37.299 [2024-07-11 18:25:23.607501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.299 [2024-07-11 18:25:23.607616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.299 [2024-07-11 18:25:23.607643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:37.299 [2024-07-11 18:25:23.607655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:37.299 [2024-07-11 18:25:23.607671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.299 [2024-07-11 18:25:23.607767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.299 [2024-07-11 18:25:23.607782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:37.299 [2024-07-11 18:25:23.607794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:18:37.299 [2024-07-11 18:25:23.607804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.299 [2024-07-11 18:25:23.607849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.299 [2024-07-11 18:25:23.607862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:37.299 [2024-07-11 18:25:23.607877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:37.299 [2024-07-11 18:25:23.607887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.299 [2024-07-11 18:25:23.607922] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:37.299 [2024-07-11 18:25:23.607937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.299 [2024-07-11 18:25:23.607948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:37.299 [2024-07-11 18:25:23.607960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:18:37.299 [2024-07-11 18:25:23.607969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.299 [2024-07-11 18:25:23.611555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.299 [2024-07-11 18:25:23.611606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:37.299 [2024-07-11 18:25:23.611637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.558 ms 00:18:37.299 [2024-07-11 18:25:23.611654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.299 [2024-07-11 18:25:23.611747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.299 [2024-07-11 18:25:23.611765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:37.299 [2024-07-11 18:25:23.611776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:18:37.299 [2024-07-11 18:25:23.611785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.299 
[2024-07-11 18:25:23.612929] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:37.299 [2024-07-11 18:25:23.614125] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 112.433 ms, result 0 00:18:37.299 [2024-07-11 18:25:23.615030] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:37.299 [2024-07-11 18:25:23.624317] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:49.003  Copying: 25/256 [MB] (25 MBps) Copying: 47/256 [MB] (22 MBps) Copying: 69/256 [MB] (21 MBps) Copying: 91/256 [MB] (21 MBps) Copying: 113/256 [MB] (22 MBps) Copying: 136/256 [MB] (22 MBps) Copying: 158/256 [MB] (22 MBps) Copying: 180/256 [MB] (22 MBps) Copying: 203/256 [MB] (22 MBps) Copying: 225/256 [MB] (21 MBps) Copying: 247/256 [MB] (22 MBps) Copying: 256/256 [MB] (average 22 MBps)[2024-07-11 18:25:35.379989] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:49.003 [2024-07-11 18:25:35.382859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.003 [2024-07-11 18:25:35.382926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:49.003 [2024-07-11 18:25:35.382951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:49.003 [2024-07-11 18:25:35.382966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.003 [2024-07-11 18:25:35.383005] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:49.003 [2024-07-11 18:25:35.383614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.003 [2024-07-11 18:25:35.383652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:49.003 [2024-07-11 18:25:35.383670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.581 ms 00:18:49.003 [2024-07-11 18:25:35.383683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.003 [2024-07-11 18:25:35.384052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.003 [2024-07-11 18:25:35.384113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:49.003 [2024-07-11 18:25:35.384131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.333 ms 00:18:49.003 [2024-07-11 18:25:35.384145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.003 [2024-07-11 18:25:35.388856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.003 [2024-07-11 18:25:35.388909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:49.003 [2024-07-11 18:25:35.388927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.679 ms 00:18:49.003 [2024-07-11 18:25:35.388940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.003 [2024-07-11 18:25:35.399446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.003 [2024-07-11 18:25:35.399527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:49.003 [2024-07-11 18:25:35.399563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.465 ms 00:18:49.003 [2024-07-11 18:25:35.399577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
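The copy loop above settles at an average of 22 MBps over 256 MB, so the transfer should take roughly twelve seconds, which agrees with the wall-clock gap between the 18:25:23 startup timestamps and the 18:25:35 teardown timestamps. A quick check using only the numbers printed above:

    # 256 MB at the reported 22 MBps average.
    awk 'BEGIN { printf "expected duration: %.1f s\n", 256 / 22 }'   # -> 11.6 s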
00:18:49.003 [2024-07-11 18:25:35.401350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.003 [2024-07-11 18:25:35.401401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:49.003 [2024-07-11 18:25:35.401420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.646 ms 00:18:49.003 [2024-07-11 18:25:35.401434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.003 [2024-07-11 18:25:35.404827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.003 [2024-07-11 18:25:35.404902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:49.003 [2024-07-11 18:25:35.404927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.338 ms 00:18:49.003 [2024-07-11 18:25:35.404953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.003 [2024-07-11 18:25:35.405160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.003 [2024-07-11 18:25:35.405212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:49.003 [2024-07-11 18:25:35.405230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.148 ms 00:18:49.003 [2024-07-11 18:25:35.405244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.003 [2024-07-11 18:25:35.407312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.003 [2024-07-11 18:25:35.407360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:18:49.003 [2024-07-11 18:25:35.407378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.025 ms 00:18:49.003 [2024-07-11 18:25:35.407392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.003 [2024-07-11 18:25:35.409124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.003 [2024-07-11 18:25:35.409169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:18:49.003 [2024-07-11 18:25:35.409187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.680 ms 00:18:49.003 [2024-07-11 18:25:35.409200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.003 [2024-07-11 18:25:35.410639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.003 [2024-07-11 18:25:35.410703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:49.003 [2024-07-11 18:25:35.410723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.388 ms 00:18:49.003 [2024-07-11 18:25:35.410735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.003 [2024-07-11 18:25:35.412005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.003 [2024-07-11 18:25:35.412056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:49.003 [2024-07-11 18:25:35.412073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.179 ms 00:18:49.003 [2024-07-11 18:25:35.412106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.003 [2024-07-11 18:25:35.412157] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:49.003 [2024-07-11 18:25:35.412186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:49.003 [2024-07-11 18:25:35.412220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 
wr_cnt: 0 state: free 00:18:49.003 [2024-07-11 18:25:35.412235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:49.003 [2024-07-11 18:25:35.412249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:49.003 [2024-07-11 18:25:35.412267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:49.003 [2024-07-11 18:25:35.412292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:49.003 [2024-07-11 18:25:35.412319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:49.003 [2024-07-11 18:25:35.412335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:49.003 [2024-07-11 18:25:35.412350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:49.003 [2024-07-11 18:25:35.412364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:49.003 [2024-07-11 18:25:35.412378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:49.003 [2024-07-11 18:25:35.412392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:49.003 [2024-07-11 18:25:35.412409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:49.003 [2024-07-11 18:25:35.412434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:49.003 [2024-07-11 18:25:35.412460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:49.003 [2024-07-11 18:25:35.412486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:49.003 [2024-07-11 18:25:35.412502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:49.003 [2024-07-11 18:25:35.412516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:49.003 [2024-07-11 18:25:35.412530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:49.003 [2024-07-11 18:25:35.412554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:49.003 [2024-07-11 18:25:35.412581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:49.003 [2024-07-11 18:25:35.412605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:49.003 [2024-07-11 18:25:35.412624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:49.003 [2024-07-11 18:25:35.412650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:49.003 [2024-07-11 18:25:35.412672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:49.003 [2024-07-11 18:25:35.412687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:49.003 [2024-07-11 18:25:35.412703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
27: 0 / 261120 wr_cnt: 0 state: free 00:18:49.003 [2024-07-11 18:25:35.412727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:49.003 [2024-07-11 18:25:35.412754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:49.003 [2024-07-11 18:25:35.412773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:49.003 [2024-07-11 18:25:35.412787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:49.003 [2024-07-11 18:25:35.412802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:49.003 [2024-07-11 18:25:35.412817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:49.003 [2024-07-11 18:25:35.412838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:49.003 [2024-07-11 18:25:35.412864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:49.003 [2024-07-11 18:25:35.412888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:49.003 [2024-07-11 18:25:35.412916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:49.003 [2024-07-11 18:25:35.412937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:49.003 [2024-07-11 18:25:35.412952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:49.003 [2024-07-11 18:25:35.412966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:49.003 [2024-07-11 18:25:35.412999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:49.003 [2024-07-11 18:25:35.413013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:49.003 [2024-07-11 18:25:35.413028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:49.003 [2024-07-11 18:25:35.413042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:49.003 [2024-07-11 18:25:35.413066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:49.003 [2024-07-11 18:25:35.413121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:49.003 [2024-07-11 18:25:35.413151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:49.003 [2024-07-11 18:25:35.413170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:49.003 [2024-07-11 18:25:35.413185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:49.003 [2024-07-11 18:25:35.413200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:49.003 [2024-07-11 18:25:35.413216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:49.003 [2024-07-11 18:25:35.413230] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:49.003 [2024-07-11 18:25:35.413244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:49.003 [2024-07-11 18:25:35.413258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:49.003 [2024-07-11 18:25:35.413272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:49.003 [2024-07-11 18:25:35.413285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:49.003 [2024-07-11 18:25:35.413303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:49.003 [2024-07-11 18:25:35.413328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:49.003 [2024-07-11 18:25:35.413354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:49.003 [2024-07-11 18:25:35.413381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:49.003 [2024-07-11 18:25:35.413407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:49.003 [2024-07-11 18:25:35.413423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:49.003 [2024-07-11 18:25:35.413436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:49.003 [2024-07-11 18:25:35.413451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:49.003 [2024-07-11 18:25:35.413468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:49.003 [2024-07-11 18:25:35.413482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:49.003 [2024-07-11 18:25:35.413496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:49.003 [2024-07-11 18:25:35.413510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:49.003 [2024-07-11 18:25:35.413524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:49.003 [2024-07-11 18:25:35.413538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:49.003 [2024-07-11 18:25:35.413552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:49.003 [2024-07-11 18:25:35.413566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:49.003 [2024-07-11 18:25:35.413584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:49.003 [2024-07-11 18:25:35.413609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:49.003 [2024-07-11 18:25:35.413636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:49.003 [2024-07-11 18:25:35.413663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:49.003 [2024-07-11 18:25:35.413679] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:49.003 [2024-07-11 18:25:35.413693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:49.003 [2024-07-11 18:25:35.413707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:49.003 [2024-07-11 18:25:35.413721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:49.003 [2024-07-11 18:25:35.413735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:49.003 [2024-07-11 18:25:35.413749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:49.003 [2024-07-11 18:25:35.413763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:49.003 [2024-07-11 18:25:35.413782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:49.003 [2024-07-11 18:25:35.413810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:49.003 [2024-07-11 18:25:35.413830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:49.003 [2024-07-11 18:25:35.413844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:49.003 [2024-07-11 18:25:35.413858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:49.004 [2024-07-11 18:25:35.413872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:49.004 [2024-07-11 18:25:35.413890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:49.004 [2024-07-11 18:25:35.413909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:49.004 [2024-07-11 18:25:35.413927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:49.004 [2024-07-11 18:25:35.413952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:49.004 [2024-07-11 18:25:35.413972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:49.004 [2024-07-11 18:25:35.413987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:49.004 [2024-07-11 18:25:35.414001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:49.004 [2024-07-11 18:25:35.414016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:49.004 [2024-07-11 18:25:35.414030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:49.004 [2024-07-11 18:25:35.414044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:49.004 [2024-07-11 18:25:35.414058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:49.004 [2024-07-11 18:25:35.414104] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:49.004 [2024-07-11 18:25:35.414136] ftl_debug.c: 
212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1b71244a-3e55-46f4-8731-7cb3003edfa4 00:18:49.004 [2024-07-11 18:25:35.414184] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:49.004 [2024-07-11 18:25:35.414203] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:49.004 [2024-07-11 18:25:35.414217] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:49.004 [2024-07-11 18:25:35.414244] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:49.004 [2024-07-11 18:25:35.414266] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:49.004 [2024-07-11 18:25:35.414279] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:49.004 [2024-07-11 18:25:35.414312] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:49.004 [2024-07-11 18:25:35.414335] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:49.004 [2024-07-11 18:25:35.414375] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:49.004 [2024-07-11 18:25:35.414391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.004 [2024-07-11 18:25:35.414413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:49.004 [2024-07-11 18:25:35.414428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.236 ms 00:18:49.004 [2024-07-11 18:25:35.414441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.262 [2024-07-11 18:25:35.416205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.262 [2024-07-11 18:25:35.416246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:49.262 [2024-07-11 18:25:35.416273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.726 ms 00:18:49.262 [2024-07-11 18:25:35.416286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.262 [2024-07-11 18:25:35.416387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.262 [2024-07-11 18:25:35.416405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:49.262 [2024-07-11 18:25:35.416421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:18:49.262 [2024-07-11 18:25:35.416434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.262 [2024-07-11 18:25:35.423613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:49.262 [2024-07-11 18:25:35.423689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:49.262 [2024-07-11 18:25:35.423717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:49.263 [2024-07-11 18:25:35.423732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.263 [2024-07-11 18:25:35.423855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:49.263 [2024-07-11 18:25:35.423877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:49.263 [2024-07-11 18:25:35.423893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:49.263 [2024-07-11 18:25:35.423905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.263 [2024-07-11 18:25:35.423991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:49.263 [2024-07-11 18:25:35.424027] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:49.263 [2024-07-11 18:25:35.424058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:49.263 [2024-07-11 18:25:35.424166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.263 [2024-07-11 18:25:35.424219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:49.263 [2024-07-11 18:25:35.424236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:49.263 [2024-07-11 18:25:35.424262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:49.263 [2024-07-11 18:25:35.424274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.263 [2024-07-11 18:25:35.434503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:49.263 [2024-07-11 18:25:35.434641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:49.263 [2024-07-11 18:25:35.434668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:49.263 [2024-07-11 18:25:35.434706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.263 [2024-07-11 18:25:35.442795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:49.263 [2024-07-11 18:25:35.442871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:49.263 [2024-07-11 18:25:35.442892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:49.263 [2024-07-11 18:25:35.442907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.263 [2024-07-11 18:25:35.442962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:49.263 [2024-07-11 18:25:35.442979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:49.263 [2024-07-11 18:25:35.443010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:49.263 [2024-07-11 18:25:35.443023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.263 [2024-07-11 18:25:35.443119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:49.263 [2024-07-11 18:25:35.443147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:49.263 [2024-07-11 18:25:35.443162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:49.263 [2024-07-11 18:25:35.443175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.263 [2024-07-11 18:25:35.443302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:49.263 [2024-07-11 18:25:35.443339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:49.263 [2024-07-11 18:25:35.443362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:49.263 [2024-07-11 18:25:35.443389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.263 [2024-07-11 18:25:35.443484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:49.263 [2024-07-11 18:25:35.443530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:49.263 [2024-07-11 18:25:35.443548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:49.263 [2024-07-11 18:25:35.443561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.263 [2024-07-11 18:25:35.443619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:18:49.263 [2024-07-11 18:25:35.443637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:49.263 [2024-07-11 18:25:35.443651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:49.263 [2024-07-11 18:25:35.443672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.263 [2024-07-11 18:25:35.443761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:49.263 [2024-07-11 18:25:35.443797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:49.263 [2024-07-11 18:25:35.443812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:49.263 [2024-07-11 18:25:35.443825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.263 [2024-07-11 18:25:35.444062] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 61.149 ms, result 0 00:18:49.263 00:18:49.263 00:18:49.263 18:25:35 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:18:49.830 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:18:49.830 18:25:36 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:18:49.830 18:25:36 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:18:49.830 18:25:36 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:18:49.830 18:25:36 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:49.830 18:25:36 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:18:49.830 18:25:36 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:18:50.088 18:25:36 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 90989 00:18:50.088 18:25:36 ftl.ftl_trim -- common/autotest_common.sh@948 -- # '[' -z 90989 ']' 00:18:50.088 18:25:36 ftl.ftl_trim -- common/autotest_common.sh@952 -- # kill -0 90989 00:18:50.088 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (90989) - No such process 00:18:50.088 Process with pid 90989 is not found 00:18:50.088 18:25:36 ftl.ftl_trim -- common/autotest_common.sh@975 -- # echo 'Process with pid 90989 is not found' 00:18:50.088 00:18:50.088 real 0m55.983s 00:18:50.088 user 1m17.641s 00:18:50.088 sys 0m5.714s 00:18:50.088 18:25:36 ftl.ftl_trim -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:50.088 18:25:36 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:18:50.088 ************************************ 00:18:50.088 END TEST ftl_trim 00:18:50.088 ************************************
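The pass/fail verdict of ftl_trim above comes down to that md5sum -c line: trim.sh records checksums of the test data up front and re-verifies them once the FTL device has been shut down and its state replayed. A minimal sketch of that manifest pattern, with illustrative paths rather than the test's own:

    # record a checksum manifest before the risky operation...
    md5sum /path/to/data > /path/to/testfile.md5
    # ...exercise the device (writes, trims, shutdown, reload)...
    # ...then verify afterwards; each intact file prints ": OK"
    md5sum -c /path/to/testfile.md5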
00:18:50.088 18:25:36 ftl -- common/autotest_common.sh@1142 -- # return 0 00:18:50.088 18:25:36 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:18:50.088 18:25:36 ftl -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:18:50.088 18:25:36 ftl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:50.088 18:25:36 ftl -- common/autotest_common.sh@10 -- # set +x 00:18:50.088 ************************************ 00:18:50.088 START TEST ftl_restore 00:18:50.088 ************************************ 00:18:50.088 18:25:36 ftl.ftl_restore -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:18:50.088 * Looking for test storage... 00:18:50.088 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:50.088 18:25:36 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:50.088 18:25:36 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:18:50.088 18:25:36 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:50.088 18:25:36 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:50.088 18:25:36 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:18:50.088 18:25:36 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:50.088 18:25:36 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:50.088 18:25:36 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:50.088 18:25:36 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:50.088 18:25:36 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:50.088 18:25:36 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:50.088 18:25:36 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:50.088 18:25:36 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:50.088 18:25:36 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:50.088 18:25:36 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:50.088 18:25:36 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:50.088 18:25:36 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:50.088 18:25:36 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:50.088 18:25:36 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:50.088 18:25:36 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:50.088 18:25:36 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:50.088 18:25:36 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:50.088 18:25:36 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:50.088 18:25:36 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:50.088 18:25:36 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:50.088 18:25:36 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:50.088 18:25:36 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:50.088 18:25:36 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:50.088 18:25:36 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:50.088 18:25:36 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:50.088 18:25:36 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:18:50.088 18:25:36 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.DqBjtSDMSi 00:18:50.088 18:25:36 ftl.ftl_restore --
ftl/restore.sh@15 -- # getopts :u:c:f opt 00:18:50.088 18:25:36 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:18:50.088 18:25:36 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:18:50.088 18:25:36 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:18:50.088 18:25:36 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:18:50.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:50.088 18:25:36 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:18:50.088 18:25:36 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:18:50.088 18:25:36 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:18:50.088 18:25:36 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=91221 00:18:50.088 18:25:36 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:50.088 18:25:36 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 91221 00:18:50.088 18:25:36 ftl.ftl_restore -- common/autotest_common.sh@829 -- # '[' -z 91221 ']' 00:18:50.088 18:25:36 ftl.ftl_restore -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:50.088 18:25:36 ftl.ftl_restore -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:50.088 18:25:36 ftl.ftl_restore -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:50.088 18:25:36 ftl.ftl_restore -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:50.088 18:25:36 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:18:50.346 [2024-07-11 18:25:36.532717] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:18:50.346 [2024-07-11 18:25:36.533135] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91221 ] 00:18:50.346 [2024-07-11 18:25:36.682903] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:50.346 [2024-07-11 18:25:36.726473] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
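Every test in this log brings the target up the same way: launch spdk_tgt in the background, remember its pid for the cleanup trap, and poll the RPC socket until it answers. A hedged sketch of that launch-and-wait handshake (the loop is a simplified stand-in for waitforlisten; rpc_get_methods is just a cheap RPC to probe with, and the retry bound mirrors max_retries=100 above):

    # start the SPDK target and wait until its RPC socket responds
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
    svcpid=$!
    for ((i = 100; i > 0; i--)); do
        if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; then
            break   # target is up and serving RPCs
        fi
        sleep 0.5
    done
    (( i == 0 )) && { echo "spdk_tgt (pid $svcpid) never started listening" >&2; exit 1; }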
00:18:51.283 18:25:37 ftl.ftl_restore -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:51.283 18:25:37 ftl.ftl_restore -- common/autotest_common.sh@862 -- # return 0 00:18:51.283 18:25:37 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:18:51.284 18:25:37 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:18:51.284 18:25:37 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:18:51.284 18:25:37 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:18:51.284 18:25:37 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:18:51.284 18:25:37 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:18:51.542 18:25:37 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:18:51.542 18:25:37 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:18:51.542 18:25:37 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:18:51.542 18:25:37 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:18:51.542 18:25:37 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:51.542 18:25:37 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:18:51.542 18:25:37 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:18:51.542 18:25:37 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:18:51.801 18:25:38 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:51.801 { 00:18:51.801 "name": "nvme0n1", 00:18:51.801 "aliases": [ 00:18:51.801 "462acbbb-25bc-4628-9eda-d085087b643c" 00:18:51.801 ], 00:18:51.801 "product_name": "NVMe disk", 00:18:51.801 "block_size": 4096, 00:18:51.801 "num_blocks": 1310720, 00:18:51.801 "uuid": "462acbbb-25bc-4628-9eda-d085087b643c", 00:18:51.801 "assigned_rate_limits": { 00:18:51.801 "rw_ios_per_sec": 0, 00:18:51.801 "rw_mbytes_per_sec": 0, 00:18:51.801 "r_mbytes_per_sec": 0, 00:18:51.801 "w_mbytes_per_sec": 0 00:18:51.801 }, 00:18:51.801 "claimed": true, 00:18:51.801 "claim_type": "read_many_write_one", 00:18:51.801 "zoned": false, 00:18:51.801 "supported_io_types": { 00:18:51.801 "read": true, 00:18:51.801 "write": true, 00:18:51.801 "unmap": true, 00:18:51.801 "flush": true, 00:18:51.801 "reset": true, 00:18:51.801 "nvme_admin": true, 00:18:51.801 "nvme_io": true, 00:18:51.801 "nvme_io_md": false, 00:18:51.801 "write_zeroes": true, 00:18:51.801 "zcopy": false, 00:18:51.801 "get_zone_info": false, 00:18:51.801 "zone_management": false, 00:18:51.801 "zone_append": false, 00:18:51.801 "compare": true, 00:18:51.801 "compare_and_write": false, 00:18:51.801 "abort": true, 00:18:51.801 "seek_hole": false, 00:18:51.801 "seek_data": false, 00:18:51.801 "copy": true, 00:18:51.801 "nvme_iov_md": false 00:18:51.801 }, 00:18:51.801 "driver_specific": { 00:18:51.801 "nvme": [ 00:18:51.801 { 00:18:51.801 "pci_address": "0000:00:11.0", 00:18:51.801 "trid": { 00:18:51.801 "trtype": "PCIe", 00:18:51.801 "traddr": "0000:00:11.0" 00:18:51.801 }, 00:18:51.801 "ctrlr_data": { 00:18:51.801 "cntlid": 0, 00:18:51.801 "vendor_id": "0x1b36", 00:18:51.801 "model_number": "QEMU NVMe Ctrl", 00:18:51.801 "serial_number": "12341", 00:18:51.801 "firmware_revision": "8.0.0", 00:18:51.801 "subnqn": "nqn.2019-08.org.qemu:12341", 00:18:51.801 "oacs": { 00:18:51.801 "security": 0, 00:18:51.801 "format": 1, 00:18:51.801 "firmware": 0, 00:18:51.801 "ns_manage": 1 00:18:51.801 }, 00:18:51.801 "multi_ctrlr": false, 00:18:51.801 "ana_reporting": false 00:18:51.801 }, 00:18:51.801 "vs": { 00:18:51.801 "nvme_version": "1.4" 00:18:51.801 }, 00:18:51.801 "ns_data": { 00:18:51.801 "id": 1, 00:18:51.801 "can_share": false 00:18:51.801 } 00:18:51.801 } 00:18:51.801 ], 00:18:51.801 "mp_policy": "active_passive" 00:18:51.801 } 00:18:51.801 } 00:18:51.801 ]' 00:18:51.801 18:25:38 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:51.801 18:25:38 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:18:51.801 18:25:38 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:51.801 18:25:38 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=1310720 00:18:51.801 18:25:38 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:18:51.801 18:25:38 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 5120 00:18:51.801 18:25:38 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:18:51.801 18:25:38 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]]
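The get_bdev_size helper traced above is just two jq lookups and one multiplication: block_size times num_blocks, scaled to MiB, i.e. 4096 x 1310720 / 1048576 = 5120 MiB for this controller. As a standalone sketch (same RPC and jq filters as the trace; any bdev name works):

    # compute a bdev's size in MiB from its bdev_get_bdevs description
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    bdev_info=$($rpc bdev_get_bdevs -b nvme0n1)
    bs=$(jq '.[] .block_size' <<< "$bdev_info")    # 4096
    nb=$(jq '.[] .num_blocks' <<< "$bdev_info")    # 1310720
    echo $(( bs * nb / 1024 / 1024 ))              # 5120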
00:18:51.801 18:25:38 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:18:51.801 18:25:38 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:52.060 18:25:38 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:18:52.060 18:25:38 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=4b925f1c-7b4f-46d1-88e7-3c9b99f88dea 00:18:52.060 18:25:38 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:18:52.060 18:25:38 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4b925f1c-7b4f-46d1-88e7-3c9b99f88dea 00:18:52.319 18:25:38 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:18:52.578 18:25:38 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=d806e8eb-8f79-4bb0-ac95-c3ec123a11b2 00:18:52.578 18:25:38 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u d806e8eb-8f79-4bb0-ac95-c3ec123a11b2 00:18:52.838 18:25:39 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=0c1b30db-a0a5-46cd-a0bf-a2fcb4799fb7 00:18:52.838 18:25:39 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:18:52.838 18:25:39 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 0c1b30db-a0a5-46cd-a0bf-a2fcb4799fb7 00:18:52.838 18:25:39 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:18:52.838 18:25:39 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:18:52.838 18:25:39 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=0c1b30db-a0a5-46cd-a0bf-a2fcb4799fb7 00:18:52.838 18:25:39 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:18:52.838 18:25:39 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 0c1b30db-a0a5-46cd-a0bf-a2fcb4799fb7 00:18:52.838 18:25:39 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=0c1b30db-a0a5-46cd-a0bf-a2fcb4799fb7 00:18:52.838 18:25:39 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:52.838 18:25:39 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:18:52.838 18:25:39 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:18:52.838 18:25:39 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0c1b30db-a0a5-46cd-a0bf-a2fcb4799fb7 00:18:53.097 18:25:39 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:53.097 { 00:18:53.097 "name": "0c1b30db-a0a5-46cd-a0bf-a2fcb4799fb7", 00:18:53.097 "aliases": [ 00:18:53.097 "lvs/nvme0n1p0" 00:18:53.097 ], 00:18:53.097 "product_name": "Logical Volume", 00:18:53.097 "block_size": 4096, 00:18:53.097 "num_blocks": 26476544, 00:18:53.097 "uuid": "0c1b30db-a0a5-46cd-a0bf-a2fcb4799fb7", 00:18:53.097 "assigned_rate_limits": { 00:18:53.097 "rw_ios_per_sec": 0, 00:18:53.097 "rw_mbytes_per_sec": 0, 00:18:53.097 "r_mbytes_per_sec": 0, 00:18:53.097 "w_mbytes_per_sec": 0 00:18:53.097 }, 00:18:53.097 "claimed": false, 00:18:53.097 "zoned": false, 00:18:53.097 "supported_io_types": { 00:18:53.097 "read": true, 00:18:53.097 "write": true, 00:18:53.097 "unmap": true, 00:18:53.097 "flush": false, 00:18:53.097 "reset": true, 00:18:53.097 "nvme_admin": false, 00:18:53.097 "nvme_io": false, 00:18:53.097 "nvme_io_md": false, 00:18:53.097 "write_zeroes": true, 00:18:53.097 "zcopy": false, 00:18:53.097 "get_zone_info": false, 00:18:53.097 "zone_management": false, 00:18:53.097 "zone_append": false, 00:18:53.097 "compare": false, 00:18:53.097 "compare_and_write": false, 00:18:53.097 "abort":
false, 00:18:53.097 "seek_hole": true, 00:18:53.097 "seek_data": true, 00:18:53.097 "copy": false, 00:18:53.097 "nvme_iov_md": false 00:18:53.097 }, 00:18:53.097 "driver_specific": { 00:18:53.097 "lvol": { 00:18:53.097 "lvol_store_uuid": "d806e8eb-8f79-4bb0-ac95-c3ec123a11b2", 00:18:53.097 "base_bdev": "nvme0n1", 00:18:53.097 "thin_provision": true, 00:18:53.097 "num_allocated_clusters": 0, 00:18:53.097 "snapshot": false, 00:18:53.097 "clone": false, 00:18:53.097 "esnap_clone": false 00:18:53.097 } 00:18:53.097 } 00:18:53.097 } 00:18:53.097 ]' 00:18:53.097 18:25:39 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:53.097 18:25:39 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:18:53.097 18:25:39 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:53.356 18:25:39 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:18:53.356 18:25:39 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:18:53.356 18:25:39 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:18:53.356 18:25:39 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:18:53.356 18:25:39 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:18:53.356 18:25:39 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:18:53.618 18:25:39 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:18:53.618 18:25:39 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:18:53.618 18:25:39 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 0c1b30db-a0a5-46cd-a0bf-a2fcb4799fb7 00:18:53.618 18:25:39 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=0c1b30db-a0a5-46cd-a0bf-a2fcb4799fb7 00:18:53.618 18:25:39 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:53.618 18:25:39 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:18:53.618 18:25:39 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:18:53.618 18:25:39 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0c1b30db-a0a5-46cd-a0bf-a2fcb4799fb7 00:18:53.877 18:25:40 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:53.877 { 00:18:53.877 "name": "0c1b30db-a0a5-46cd-a0bf-a2fcb4799fb7", 00:18:53.877 "aliases": [ 00:18:53.877 "lvs/nvme0n1p0" 00:18:53.877 ], 00:18:53.877 "product_name": "Logical Volume", 00:18:53.877 "block_size": 4096, 00:18:53.877 "num_blocks": 26476544, 00:18:53.877 "uuid": "0c1b30db-a0a5-46cd-a0bf-a2fcb4799fb7", 00:18:53.877 "assigned_rate_limits": { 00:18:53.877 "rw_ios_per_sec": 0, 00:18:53.877 "rw_mbytes_per_sec": 0, 00:18:53.877 "r_mbytes_per_sec": 0, 00:18:53.877 "w_mbytes_per_sec": 0 00:18:53.877 }, 00:18:53.877 "claimed": false, 00:18:53.877 "zoned": false, 00:18:53.877 "supported_io_types": { 00:18:53.877 "read": true, 00:18:53.877 "write": true, 00:18:53.877 "unmap": true, 00:18:53.877 "flush": false, 00:18:53.877 "reset": true, 00:18:53.877 "nvme_admin": false, 00:18:53.877 "nvme_io": false, 00:18:53.877 "nvme_io_md": false, 00:18:53.877 "write_zeroes": true, 00:18:53.877 "zcopy": false, 00:18:53.877 "get_zone_info": false, 00:18:53.877 "zone_management": false, 00:18:53.877 "zone_append": false, 00:18:53.877 "compare": false, 00:18:53.877 "compare_and_write": false, 00:18:53.877 "abort": false, 00:18:53.877 "seek_hole": true, 00:18:53.877 "seek_data": 
true, 00:18:53.877 "copy": false, 00:18:53.877 "nvme_iov_md": false 00:18:53.877 }, 00:18:53.877 "driver_specific": { 00:18:53.878 "lvol": { 00:18:53.878 "lvol_store_uuid": "d806e8eb-8f79-4bb0-ac95-c3ec123a11b2", 00:18:53.878 "base_bdev": "nvme0n1", 00:18:53.878 "thin_provision": true, 00:18:53.878 "num_allocated_clusters": 0, 00:18:53.878 "snapshot": false, 00:18:53.878 "clone": false, 00:18:53.878 "esnap_clone": false 00:18:53.878 } 00:18:53.878 } 00:18:53.878 } 00:18:53.878 ]' 00:18:53.878 18:25:40 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:53.878 18:25:40 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:18:53.878 18:25:40 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:53.878 18:25:40 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:18:53.878 18:25:40 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:18:53.878 18:25:40 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:18:53.878 18:25:40 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:18:53.878 18:25:40 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:18:54.137 18:25:40 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:18:54.137 18:25:40 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 0c1b30db-a0a5-46cd-a0bf-a2fcb4799fb7 00:18:54.137 18:25:40 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=0c1b30db-a0a5-46cd-a0bf-a2fcb4799fb7 00:18:54.137 18:25:40 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:54.137 18:25:40 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:18:54.137 18:25:40 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:18:54.137 18:25:40 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0c1b30db-a0a5-46cd-a0bf-a2fcb4799fb7 00:18:54.396 18:25:40 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:54.396 { 00:18:54.396 "name": "0c1b30db-a0a5-46cd-a0bf-a2fcb4799fb7", 00:18:54.396 "aliases": [ 00:18:54.396 "lvs/nvme0n1p0" 00:18:54.396 ], 00:18:54.396 "product_name": "Logical Volume", 00:18:54.396 "block_size": 4096, 00:18:54.396 "num_blocks": 26476544, 00:18:54.396 "uuid": "0c1b30db-a0a5-46cd-a0bf-a2fcb4799fb7", 00:18:54.396 "assigned_rate_limits": { 00:18:54.396 "rw_ios_per_sec": 0, 00:18:54.396 "rw_mbytes_per_sec": 0, 00:18:54.396 "r_mbytes_per_sec": 0, 00:18:54.396 "w_mbytes_per_sec": 0 00:18:54.396 }, 00:18:54.396 "claimed": false, 00:18:54.396 "zoned": false, 00:18:54.396 "supported_io_types": { 00:18:54.396 "read": true, 00:18:54.396 "write": true, 00:18:54.396 "unmap": true, 00:18:54.396 "flush": false, 00:18:54.396 "reset": true, 00:18:54.396 "nvme_admin": false, 00:18:54.396 "nvme_io": false, 00:18:54.396 "nvme_io_md": false, 00:18:54.396 "write_zeroes": true, 00:18:54.396 "zcopy": false, 00:18:54.396 "get_zone_info": false, 00:18:54.396 "zone_management": false, 00:18:54.396 "zone_append": false, 00:18:54.396 "compare": false, 00:18:54.396 "compare_and_write": false, 00:18:54.396 "abort": false, 00:18:54.396 "seek_hole": true, 00:18:54.396 "seek_data": true, 00:18:54.396 "copy": false, 00:18:54.396 "nvme_iov_md": false 00:18:54.396 }, 00:18:54.396 "driver_specific": { 00:18:54.396 "lvol": { 00:18:54.396 "lvol_store_uuid": "d806e8eb-8f79-4bb0-ac95-c3ec123a11b2", 00:18:54.396 "base_bdev": 
"nvme0n1", 00:18:54.396 "thin_provision": true, 00:18:54.396 "num_allocated_clusters": 0, 00:18:54.396 "snapshot": false, 00:18:54.396 "clone": false, 00:18:54.396 "esnap_clone": false 00:18:54.396 } 00:18:54.396 } 00:18:54.396 } 00:18:54.396 ]' 00:18:54.396 18:25:40 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:54.396 18:25:40 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:18:54.396 18:25:40 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:54.396 18:25:40 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:18:54.396 18:25:40 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:18:54.397 18:25:40 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:18:54.397 18:25:40 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:18:54.397 18:25:40 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 0c1b30db-a0a5-46cd-a0bf-a2fcb4799fb7 --l2p_dram_limit 10' 00:18:54.397 18:25:40 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:18:54.397 18:25:40 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:18:54.397 18:25:40 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:18:54.397 18:25:40 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:18:54.397 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:18:54.397 18:25:40 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 0c1b30db-a0a5-46cd-a0bf-a2fcb4799fb7 --l2p_dram_limit 10 -c nvc0n1p0 00:18:54.656 [2024-07-11 18:25:40.916378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.656 [2024-07-11 18:25:40.916428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:54.656 [2024-07-11 18:25:40.916466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:54.656 [2024-07-11 18:25:40.916489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.656 [2024-07-11 18:25:40.916569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.656 [2024-07-11 18:25:40.916592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:54.656 [2024-07-11 18:25:40.916616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:18:54.656 [2024-07-11 18:25:40.916627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.656 [2024-07-11 18:25:40.916669] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:54.656 [2024-07-11 18:25:40.916979] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:54.656 [2024-07-11 18:25:40.917008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.656 [2024-07-11 18:25:40.917020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:54.656 [2024-07-11 18:25:40.917049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.358 ms 00:18:54.656 [2024-07-11 18:25:40.917060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.656 [2024-07-11 18:25:40.917191] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 2261f664-bc0a-4247-828d-02457d49b5ee 00:18:54.656 [2024-07-11 
00:18:54.656 [2024-07-11 18:25:40.916378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.656 [2024-07-11 18:25:40.916428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:54.656 [2024-07-11 18:25:40.916466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:54.656 [2024-07-11 18:25:40.916489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.656 [2024-07-11 18:25:40.916569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.656 [2024-07-11 18:25:40.916592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:54.656 [2024-07-11 18:25:40.916616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:18:54.656 [2024-07-11 18:25:40.916627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.656 [2024-07-11 18:25:40.916669] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:54.656 [2024-07-11 18:25:40.916979] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:54.656 [2024-07-11 18:25:40.917008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.656 [2024-07-11 18:25:40.917020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:54.656 [2024-07-11 18:25:40.917049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.358 ms 00:18:54.656 [2024-07-11 18:25:40.917060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.656 [2024-07-11 18:25:40.917191] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 2261f664-bc0a-4247-828d-02457d49b5ee 00:18:54.656 [2024-07-11 18:25:40.918212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.656 [2024-07-11 18:25:40.918243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:18:54.656 [2024-07-11 18:25:40.918259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:18:54.656 [2024-07-11 18:25:40.918271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.656 [2024-07-11 18:25:40.922421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.656 [2024-07-11 18:25:40.922498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:54.656 [2024-07-11 18:25:40.922520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.102 ms 00:18:54.656 [2024-07-11 18:25:40.922533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.656 [2024-07-11 18:25:40.922624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.656 [2024-07-11 18:25:40.922647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:54.656 [2024-07-11 18:25:40.922659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:18:54.656 [2024-07-11 18:25:40.922671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.656 [2024-07-11 18:25:40.922766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.656 [2024-07-11 18:25:40.922787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:54.657 [2024-07-11 18:25:40.922799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:18:54.657 [2024-07-11 18:25:40.922811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.657 [2024-07-11 18:25:40.922841] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:54.657 [2024-07-11 18:25:40.924249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.657 [2024-07-11 18:25:40.924299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:54.657 [2024-07-11 18:25:40.924317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.413 ms 00:18:54.657 [2024-07-11 18:25:40.924328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.657 [2024-07-11 18:25:40.924372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.657 [2024-07-11 18:25:40.924393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:54.657 [2024-07-11 18:25:40.924407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:18:54.657 [2024-07-11 18:25:40.924418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.657 [2024-07-11 18:25:40.924459] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:18:54.657 [2024-07-11 18:25:40.924597] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:54.657 [2024-07-11 18:25:40.924625] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:54.657 [2024-07-11 18:25:40.924639] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:18:54.657 [2024-07-11 18:25:40.924654] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity:
103424.00 MiB 00:18:54.657 [2024-07-11 18:25:40.924667] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:54.657 [2024-07-11 18:25:40.924679] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:54.657 [2024-07-11 18:25:40.924698] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:54.657 [2024-07-11 18:25:40.924709] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:54.657 [2024-07-11 18:25:40.924719] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:54.657 [2024-07-11 18:25:40.924732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.657 [2024-07-11 18:25:40.924742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:54.657 [2024-07-11 18:25:40.924754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.277 ms 00:18:54.657 [2024-07-11 18:25:40.924764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.657 [2024-07-11 18:25:40.924849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.657 [2024-07-11 18:25:40.924863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:54.657 [2024-07-11 18:25:40.924877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:18:54.657 [2024-07-11 18:25:40.924888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.657 [2024-07-11 18:25:40.924996] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:54.657 [2024-07-11 18:25:40.925013] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:54.657 [2024-07-11 18:25:40.925025] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:54.657 [2024-07-11 18:25:40.925036] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:54.657 [2024-07-11 18:25:40.925050] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:54.657 [2024-07-11 18:25:40.925060] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:54.657 [2024-07-11 18:25:40.925071] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:54.657 [2024-07-11 18:25:40.925080] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:54.657 [2024-07-11 18:25:40.925091] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:54.657 [2024-07-11 18:25:40.925120] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:54.657 [2024-07-11 18:25:40.925133] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:54.657 [2024-07-11 18:25:40.925143] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:54.657 [2024-07-11 18:25:40.925155] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:54.657 [2024-07-11 18:25:40.925165] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:54.657 [2024-07-11 18:25:40.925177] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:54.657 [2024-07-11 18:25:40.925187] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:54.657 [2024-07-11 18:25:40.925197] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:54.657 [2024-07-11 18:25:40.925207] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 
00:18:54.657 [2024-07-11 18:25:40.925220] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:54.657 [2024-07-11 18:25:40.925230] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:54.657 [2024-07-11 18:25:40.925241] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:54.657 [2024-07-11 18:25:40.925250] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:54.657 [2024-07-11 18:25:40.925261] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:54.657 [2024-07-11 18:25:40.925270] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:54.657 [2024-07-11 18:25:40.925281] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:54.657 [2024-07-11 18:25:40.925290] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:54.657 [2024-07-11 18:25:40.925301] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:54.657 [2024-07-11 18:25:40.925310] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:54.657 [2024-07-11 18:25:40.925320] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:54.657 [2024-07-11 18:25:40.925329] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:54.657 [2024-07-11 18:25:40.925343] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:54.657 [2024-07-11 18:25:40.925354] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:54.657 [2024-07-11 18:25:40.925364] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:54.657 [2024-07-11 18:25:40.925373] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:54.657 [2024-07-11 18:25:40.925384] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:54.657 [2024-07-11 18:25:40.925393] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:54.657 [2024-07-11 18:25:40.925404] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:54.657 [2024-07-11 18:25:40.925413] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:54.657 [2024-07-11 18:25:40.925424] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:54.657 [2024-07-11 18:25:40.925433] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:54.657 [2024-07-11 18:25:40.925444] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:54.657 [2024-07-11 18:25:40.925453] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:54.657 [2024-07-11 18:25:40.925463] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:54.657 [2024-07-11 18:25:40.925472] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:54.657 [2024-07-11 18:25:40.925484] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:54.657 [2024-07-11 18:25:40.925496] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:54.657 [2024-07-11 18:25:40.925509] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:54.657 [2024-07-11 18:25:40.925520] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:54.657 [2024-07-11 18:25:40.925532] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:54.657 [2024-07-11 18:25:40.925541] ftl_layout.c: 121:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:54.657 [2024-07-11 18:25:40.925552] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:54.657 [2024-07-11 18:25:40.925561] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:54.657 [2024-07-11 18:25:40.925572] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:54.657 [2024-07-11 18:25:40.925598] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:54.657 [2024-07-11 18:25:40.925615] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:54.657 [2024-07-11 18:25:40.925626] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:54.657 [2024-07-11 18:25:40.925639] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:54.657 [2024-07-11 18:25:40.925650] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:54.657 [2024-07-11 18:25:40.925661] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:54.657 [2024-07-11 18:25:40.925671] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:54.657 [2024-07-11 18:25:40.925683] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:54.657 [2024-07-11 18:25:40.925693] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:54.657 [2024-07-11 18:25:40.925706] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:54.657 [2024-07-11 18:25:40.925717] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:54.657 [2024-07-11 18:25:40.925728] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:54.657 [2024-07-11 18:25:40.925738] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:54.657 [2024-07-11 18:25:40.925750] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:54.657 [2024-07-11 18:25:40.925759] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:54.657 [2024-07-11 18:25:40.925771] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:54.657 [2024-07-11 18:25:40.925782] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:54.657 [2024-07-11 18:25:40.925795] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:54.657 [2024-07-11 18:25:40.925806] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:54.657 [2024-07-11 18:25:40.925818] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:54.657 [2024-07-11 18:25:40.925828] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:54.657 [2024-07-11 18:25:40.925840] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:54.657 [2024-07-11 18:25:40.925851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.657 [2024-07-11 18:25:40.925863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:54.658 [2024-07-11 18:25:40.925874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.914 ms 00:18:54.658 [2024-07-11 18:25:40.925888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.658 [2024-07-11 18:25:40.925947] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:18:54.658 [2024-07-11 18:25:40.925974] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:18:57.206 [2024-07-11 18:25:43.065572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.206 [2024-07-11 18:25:43.065885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:18:57.206 [2024-07-11 18:25:43.066013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2139.635 ms 00:18:57.206 [2024-07-11 18:25:43.066066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.206 [2024-07-11 18:25:43.073484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.206 [2024-07-11 18:25:43.073738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:57.206 [2024-07-11 18:25:43.073860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.205 ms 00:18:57.206 [2024-07-11 18:25:43.073985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.206 [2024-07-11 18:25:43.074165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.206 [2024-07-11 18:25:43.074225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:57.206 [2024-07-11 18:25:43.074333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:18:57.206 [2024-07-11 18:25:43.074395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.206 [2024-07-11 18:25:43.082625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.206 [2024-07-11 18:25:43.082861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:57.206 [2024-07-11 18:25:43.082995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.059 ms 00:18:57.206 [2024-07-11 18:25:43.083078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.206 [2024-07-11 18:25:43.083265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.206 [2024-07-11 18:25:43.083390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:57.206 [2024-07-11 18:25:43.083415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 
ms 00:18:57.206 [2024-07-11 18:25:43.083429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.206 [2024-07-11 18:25:43.083816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.206 [2024-07-11 18:25:43.083838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:57.206 [2024-07-11 18:25:43.083859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.333 ms 00:18:57.206 [2024-07-11 18:25:43.083872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.206 [2024-07-11 18:25:43.084007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.206 [2024-07-11 18:25:43.084031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:57.206 [2024-07-11 18:25:43.084043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:18:57.206 [2024-07-11 18:25:43.084055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.206 [2024-07-11 18:25:43.089706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.206 [2024-07-11 18:25:43.089765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:57.206 [2024-07-11 18:25:43.089789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.629 ms 00:18:57.206 [2024-07-11 18:25:43.089802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.206 [2024-07-11 18:25:43.097997] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:18:57.206 [2024-07-11 18:25:43.100858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.206 [2024-07-11 18:25:43.100891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:57.206 [2024-07-11 18:25:43.100927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.950 ms 00:18:57.206 [2024-07-11 18:25:43.100938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.206 [2024-07-11 18:25:43.158308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.206 [2024-07-11 18:25:43.158381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:18:57.206 [2024-07-11 18:25:43.158426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.328 ms 00:18:57.206 [2024-07-11 18:25:43.158438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.206 [2024-07-11 18:25:43.158649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.206 [2024-07-11 18:25:43.158675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:57.206 [2024-07-11 18:25:43.158699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.154 ms 00:18:57.206 [2024-07-11 18:25:43.158741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.206 [2024-07-11 18:25:43.162501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.206 [2024-07-11 18:25:43.162544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:18:57.206 [2024-07-11 18:25:43.162564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.703 ms 00:18:57.207 [2024-07-11 18:25:43.162589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.207 [2024-07-11 18:25:43.166060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.207 [2024-07-11 18:25:43.166137] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:18:57.207 [2024-07-11 18:25:43.166159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.419 ms 00:18:57.207 [2024-07-11 18:25:43.166170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.207 [2024-07-11 18:25:43.166530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.207 [2024-07-11 18:25:43.166581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:57.207 [2024-07-11 18:25:43.166606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.297 ms 00:18:57.207 [2024-07-11 18:25:43.166618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.207 [2024-07-11 18:25:43.198895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.207 [2024-07-11 18:25:43.198964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:18:57.207 [2024-07-11 18:25:43.198991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.238 ms 00:18:57.207 [2024-07-11 18:25:43.199007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.207 [2024-07-11 18:25:43.203725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.207 [2024-07-11 18:25:43.203764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:18:57.207 [2024-07-11 18:25:43.203800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.590 ms 00:18:57.207 [2024-07-11 18:25:43.203812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.207 [2024-07-11 18:25:43.207725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.207 [2024-07-11 18:25:43.207766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:18:57.207 [2024-07-11 18:25:43.207784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.864 ms 00:18:57.207 [2024-07-11 18:25:43.207795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.207 [2024-07-11 18:25:43.212172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.207 [2024-07-11 18:25:43.212239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:57.207 [2024-07-11 18:25:43.212276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.326 ms 00:18:57.207 [2024-07-11 18:25:43.212288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.207 [2024-07-11 18:25:43.212360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.207 [2024-07-11 18:25:43.212377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:57.207 [2024-07-11 18:25:43.212391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:57.207 [2024-07-11 18:25:43.212402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.207 [2024-07-11 18:25:43.212475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.207 [2024-07-11 18:25:43.212491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:57.207 [2024-07-11 18:25:43.212504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:18:57.207 [2024-07-11 18:25:43.212516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.207 [2024-07-11 18:25:43.213738] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: 
[FTL][ftl0] Management process finished, name 'FTL startup', duration = 2296.827 ms, result 0 00:18:57.207 { 00:18:57.207 "name": "ftl0", 00:18:57.207 "uuid": "2261f664-bc0a-4247-828d-02457d49b5ee" 00:18:57.207 } 00:18:57.207 18:25:43 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:18:57.207 18:25:43 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:18:57.207 18:25:43 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:18:57.207 18:25:43 ftl.ftl_restore -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:18:57.466 [2024-07-11 18:25:43.746646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.466 [2024-07-11 18:25:43.746726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:57.466 [2024-07-11 18:25:43.746749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:57.466 [2024-07-11 18:25:43.746764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.466 [2024-07-11 18:25:43.746799] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:57.466 [2024-07-11 18:25:43.747283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.466 [2024-07-11 18:25:43.747308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:57.466 [2024-07-11 18:25:43.747358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.459 ms 00:18:57.466 [2024-07-11 18:25:43.747370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.466 [2024-07-11 18:25:43.747667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.466 [2024-07-11 18:25:43.747683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:57.466 [2024-07-11 18:25:43.747696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.266 ms 00:18:57.466 [2024-07-11 18:25:43.747711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.466 [2024-07-11 18:25:43.750660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.466 [2024-07-11 18:25:43.750724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:57.466 [2024-07-11 18:25:43.750761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.925 ms 00:18:57.466 [2024-07-11 18:25:43.750774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.466 [2024-07-11 18:25:43.756781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.466 [2024-07-11 18:25:43.756811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:57.466 [2024-07-11 18:25:43.756837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.978 ms 00:18:57.466 [2024-07-11 18:25:43.756848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.466 [2024-07-11 18:25:43.758372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.466 [2024-07-11 18:25:43.758412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:57.466 [2024-07-11 18:25:43.758463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.438 ms 00:18:57.466 [2024-07-11 18:25:43.758474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.466 [2024-07-11 18:25:43.762631] mngt/ftl_mngt.c: 
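The restore.sh steps traced above stitch the saved bdev subsystem config into a standalone JSON document before tearing ftl0 down. A minimal sketch of that pattern, reusing the exact rpc.py calls from the trace — the redirect target is an assumption (the test later feeds spdk_dd a config at test/ftl/config/ftl.json); the unload trace it triggers continues below:

    # Wrap the bdev subsystem dump in a top-level "subsystems" array
    (
      echo '{"subsystems": ['
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev
      echo ']}'
    ) > /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json   # assumed destination
    # Detach the FTL bdev cleanly (this is what starts the 'FTL shutdown' trace)
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0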
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.466 [2024-07-11 18:25:43.762671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:57.466 [2024-07-11 18:25:43.762746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.094 ms 00:18:57.466 [2024-07-11 18:25:43.762758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.466 [2024-07-11 18:25:43.762891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.466 [2024-07-11 18:25:43.762911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:57.466 [2024-07-11 18:25:43.762931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:18:57.466 [2024-07-11 18:25:43.762942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.466 [2024-07-11 18:25:43.764871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.466 [2024-07-11 18:25:43.764921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:18:57.466 [2024-07-11 18:25:43.764938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.901 ms 00:18:57.466 [2024-07-11 18:25:43.764948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.466 [2024-07-11 18:25:43.766592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.466 [2024-07-11 18:25:43.766628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:18:57.466 [2024-07-11 18:25:43.766647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.601 ms 00:18:57.466 [2024-07-11 18:25:43.766657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.466 [2024-07-11 18:25:43.768001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.466 [2024-07-11 18:25:43.768038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:57.466 [2024-07-11 18:25:43.768072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.261 ms 00:18:57.466 [2024-07-11 18:25:43.768082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.466 [2024-07-11 18:25:43.769387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.466 [2024-07-11 18:25:43.769421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:57.466 [2024-07-11 18:25:43.769454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.203 ms 00:18:57.466 [2024-07-11 18:25:43.769464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.466 [2024-07-11 18:25:43.769508] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:57.466 [2024-07-11 18:25:43.769530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:57.466 [2024-07-11 18:25:43.769556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:57.466 [2024-07-11 18:25:43.769567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:57.466 [2024-07-11 18:25:43.769581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:57.466 [2024-07-11 18:25:43.769592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:57.466 [2024-07-11 18:25:43.769606] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:57.466 [2024-07-11 18:25:43.769617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:57.466 [2024-07-11 18:25:43.769629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:57.466 [2024-07-11 18:25:43.769640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:57.466 [2024-07-11 18:25:43.769652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:57.466 [2024-07-11 18:25:43.769663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:57.466 [2024-07-11 18:25:43.769675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:57.466 [2024-07-11 18:25:43.769685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:57.466 [2024-07-11 18:25:43.769697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:57.466 [2024-07-11 18:25:43.769708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:57.466 [2024-07-11 18:25:43.769720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:57.466 [2024-07-11 18:25:43.769731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:57.466 [2024-07-11 18:25:43.769743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:57.466 [2024-07-11 18:25:43.769754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:57.466 [2024-07-11 18:25:43.769766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:57.466 [2024-07-11 18:25:43.769777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:57.466 [2024-07-11 18:25:43.769791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:57.466 [2024-07-11 18:25:43.769802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:57.466 [2024-07-11 18:25:43.769814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:57.466 [2024-07-11 18:25:43.769825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:57.466 [2024-07-11 18:25:43.769837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:57.466 [2024-07-11 18:25:43.769848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:57.466 [2024-07-11 18:25:43.769861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:57.466 [2024-07-11 18:25:43.769871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:57.466 [2024-07-11 18:25:43.769885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:57.466 [2024-07-11 
18:25:43.769896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:57.466 [2024-07-11 18:25:43.769908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:57.466 [2024-07-11 18:25:43.769919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:57.466 [2024-07-11 18:25:43.769931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:57.466 [2024-07-11 18:25:43.769943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:57.466 [2024-07-11 18:25:43.769955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:57.466 [2024-07-11 18:25:43.769966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:57.466 [2024-07-11 18:25:43.769980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:57.466 [2024-07-11 18:25:43.769990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:57.466 [2024-07-11 18:25:43.770002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:57.467 [2024-07-11 18:25:43.770013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:57.467 [2024-07-11 18:25:43.770025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:57.467 [2024-07-11 18:25:43.770035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:57.467 [2024-07-11 18:25:43.770047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:57.467 [2024-07-11 18:25:43.770058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:57.467 [2024-07-11 18:25:43.770070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:57.467 [2024-07-11 18:25:43.770093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:57.467 [2024-07-11 18:25:43.770109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:57.467 [2024-07-11 18:25:43.770120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:57.467 [2024-07-11 18:25:43.770132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:57.467 [2024-07-11 18:25:43.770142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:57.467 [2024-07-11 18:25:43.770154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:57.467 [2024-07-11 18:25:43.770165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:57.467 [2024-07-11 18:25:43.770179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:57.467 [2024-07-11 18:25:43.770189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 
00:18:57.467 [2024-07-11 18:25:43.770203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:57.467 [2024-07-11 18:25:43.770214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:57.467 [2024-07-11 18:25:43.770226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:57.467 [2024-07-11 18:25:43.770237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:57.467 [2024-07-11 18:25:43.770250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:57.467 [2024-07-11 18:25:43.770260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:57.467 [2024-07-11 18:25:43.770272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:57.467 [2024-07-11 18:25:43.770283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:57.467 [2024-07-11 18:25:43.770295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:57.467 [2024-07-11 18:25:43.770306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:57.467 [2024-07-11 18:25:43.770318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:57.467 [2024-07-11 18:25:43.770329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:57.467 [2024-07-11 18:25:43.770341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:57.467 [2024-07-11 18:25:43.770352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:57.467 [2024-07-11 18:25:43.770366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:57.467 [2024-07-11 18:25:43.770376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:57.467 [2024-07-11 18:25:43.770388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:57.467 [2024-07-11 18:25:43.770399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:57.467 [2024-07-11 18:25:43.770411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:57.467 [2024-07-11 18:25:43.770422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:57.467 [2024-07-11 18:25:43.770434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:57.467 [2024-07-11 18:25:43.770444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:57.467 [2024-07-11 18:25:43.770456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:57.467 [2024-07-11 18:25:43.770467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:57.467 [2024-07-11 18:25:43.770480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 
wr_cnt: 0 state: free 00:18:57.467 [2024-07-11 18:25:43.770490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:57.467 [2024-07-11 18:25:43.770503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:57.467 [2024-07-11 18:25:43.770513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:57.467 [2024-07-11 18:25:43.770527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:57.467 [2024-07-11 18:25:43.770538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:57.467 [2024-07-11 18:25:43.770552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:57.467 [2024-07-11 18:25:43.770563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:57.467 [2024-07-11 18:25:43.770575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:57.467 [2024-07-11 18:25:43.770585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:57.467 [2024-07-11 18:25:43.770597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:57.467 [2024-07-11 18:25:43.770608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:57.467 [2024-07-11 18:25:43.770620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:57.467 [2024-07-11 18:25:43.770630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:57.467 [2024-07-11 18:25:43.770643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:57.467 [2024-07-11 18:25:43.770653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:57.467 [2024-07-11 18:25:43.770665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:57.467 [2024-07-11 18:25:43.770675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:57.467 [2024-07-11 18:25:43.770688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:57.467 [2024-07-11 18:25:43.770745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:57.467 [2024-07-11 18:25:43.770760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:57.467 [2024-07-11 18:25:43.770781] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:57.467 [2024-07-11 18:25:43.770797] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2261f664-bc0a-4247-828d-02457d49b5ee 00:18:57.467 [2024-07-11 18:25:43.770809] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:57.467 [2024-07-11 18:25:43.770822] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:57.467 [2024-07-11 18:25:43.770833] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:57.467 [2024-07-11 18:25:43.770847] ftl_debug.c: 
216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:57.467 [2024-07-11 18:25:43.770858] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:57.467 [2024-07-11 18:25:43.770875] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:57.467 [2024-07-11 18:25:43.770887] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:57.467 [2024-07-11 18:25:43.770899] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:57.467 [2024-07-11 18:25:43.770910] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:57.467 [2024-07-11 18:25:43.770923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.467 [2024-07-11 18:25:43.770934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:57.467 [2024-07-11 18:25:43.770949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.419 ms 00:18:57.467 [2024-07-11 18:25:43.770960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.467 [2024-07-11 18:25:43.772368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.467 [2024-07-11 18:25:43.772400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:57.467 [2024-07-11 18:25:43.772418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.378 ms 00:18:57.467 [2024-07-11 18:25:43.772431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.467 [2024-07-11 18:25:43.772505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.467 [2024-07-11 18:25:43.772520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:57.467 [2024-07-11 18:25:43.772533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:18:57.467 [2024-07-11 18:25:43.772543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.467 [2024-07-11 18:25:43.777484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:57.467 [2024-07-11 18:25:43.777524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:57.467 [2024-07-11 18:25:43.777542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:57.467 [2024-07-11 18:25:43.777554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.467 [2024-07-11 18:25:43.777611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:57.467 [2024-07-11 18:25:43.777633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:57.467 [2024-07-11 18:25:43.777655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:57.467 [2024-07-11 18:25:43.777665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.467 [2024-07-11 18:25:43.777756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:57.467 [2024-07-11 18:25:43.777774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:57.467 [2024-07-11 18:25:43.777790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:57.467 [2024-07-11 18:25:43.777801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.467 [2024-07-11 18:25:43.777829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:57.467 [2024-07-11 18:25:43.777841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 
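The stats block above reports WAF: inf because 960 total writes were recorded against zero user writes — with no user traffic yet, all 960 are internal metadata writes, and write amplification (total writes divided by user writes) is undefined. A quick check with the logged counters (the shutdown rollback trace resumes below):

    # WAF = total_writes / user_writes; reported as inf when user_writes == 0
    awk 'BEGIN { total=960; user=0; print (user ? total/user : "inf") }'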
00:18:57.467 [2024-07-11 18:25:43.777853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:57.467 [2024-07-11 18:25:43.777863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.467 [2024-07-11 18:25:43.786049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:57.467 [2024-07-11 18:25:43.786125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:57.467 [2024-07-11 18:25:43.786163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:57.467 [2024-07-11 18:25:43.786176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.467 [2024-07-11 18:25:43.792778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:57.467 [2024-07-11 18:25:43.792829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:57.467 [2024-07-11 18:25:43.792865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:57.467 [2024-07-11 18:25:43.792876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.467 [2024-07-11 18:25:43.792948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:57.467 [2024-07-11 18:25:43.792964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:57.467 [2024-07-11 18:25:43.792979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:57.467 [2024-07-11 18:25:43.792990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.467 [2024-07-11 18:25:43.793062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:57.467 [2024-07-11 18:25:43.793078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:57.467 [2024-07-11 18:25:43.793090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:57.467 [2024-07-11 18:25:43.793160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.467 [2024-07-11 18:25:43.793269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:57.467 [2024-07-11 18:25:43.793287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:57.467 [2024-07-11 18:25:43.793301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:57.467 [2024-07-11 18:25:43.793312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.467 [2024-07-11 18:25:43.793363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:57.467 [2024-07-11 18:25:43.793382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:57.467 [2024-07-11 18:25:43.793395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:57.467 [2024-07-11 18:25:43.793408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.467 [2024-07-11 18:25:43.793488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:57.467 [2024-07-11 18:25:43.793503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:57.467 [2024-07-11 18:25:43.793534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:57.467 [2024-07-11 18:25:43.793546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.467 [2024-07-11 18:25:43.793604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:57.467 [2024-07-11 18:25:43.793628] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:57.467 [2024-07-11 18:25:43.793642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:57.467 [2024-07-11 18:25:43.793653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.467 [2024-07-11 18:25:43.793820] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 47.114 ms, result 0 00:18:57.467 true 00:18:57.467 18:25:43 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 91221 00:18:57.467 18:25:43 ftl.ftl_restore -- common/autotest_common.sh@948 -- # '[' -z 91221 ']' 00:18:57.467 18:25:43 ftl.ftl_restore -- common/autotest_common.sh@952 -- # kill -0 91221 00:18:57.467 18:25:43 ftl.ftl_restore -- common/autotest_common.sh@953 -- # uname 00:18:57.467 18:25:43 ftl.ftl_restore -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:57.467 18:25:43 ftl.ftl_restore -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 91221 00:18:57.467 killing process with pid 91221 00:18:57.467 18:25:43 ftl.ftl_restore -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:57.467 18:25:43 ftl.ftl_restore -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:57.467 18:25:43 ftl.ftl_restore -- common/autotest_common.sh@966 -- # echo 'killing process with pid 91221' 00:18:57.467 18:25:43 ftl.ftl_restore -- common/autotest_common.sh@967 -- # kill 91221 00:18:57.467 18:25:43 ftl.ftl_restore -- common/autotest_common.sh@972 -- # wait 91221 00:19:00.758 18:25:46 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:19:04.950 262144+0 records in 00:19:04.950 262144+0 records out 00:19:04.950 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.17074 s, 257 MB/s 00:19:04.950 18:25:50 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:19:06.850 18:25:52 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:06.850 [2024-07-11 18:25:53.066430] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
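The dd step above generated 1 GiB of random test data at a reported 257 MB/s; the figure checks out against the byte count and elapsed time dd printed (the freshly launched spdk_dd run continues below):

    # 1073741824 bytes / 4.17074 s, in decimal MB/s as dd reports it
    awk 'BEGIN { printf "%.0f MB/s\n", 1073741824 / 4.17074 / 1e6 }'
    # -> 257 MB/s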
00:19:06.850 [2024-07-11 18:25:53.066633] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91430 ] 00:19:06.850 [2024-07-11 18:25:53.217606] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:06.850 [2024-07-11 18:25:53.261234] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:07.108 [2024-07-11 18:25:53.353321] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:07.108 [2024-07-11 18:25:53.353418] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:07.108 [2024-07-11 18:25:53.511353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.108 [2024-07-11 18:25:53.511403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:07.108 [2024-07-11 18:25:53.511438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:07.108 [2024-07-11 18:25:53.511458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.108 [2024-07-11 18:25:53.511530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.108 [2024-07-11 18:25:53.511551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:07.108 [2024-07-11 18:25:53.511581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:19:07.108 [2024-07-11 18:25:53.511591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.108 [2024-07-11 18:25:53.511624] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:07.108 [2024-07-11 18:25:53.511875] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:07.108 [2024-07-11 18:25:53.511902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.108 [2024-07-11 18:25:53.511913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:07.108 [2024-07-11 18:25:53.511924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.290 ms 00:19:07.108 [2024-07-11 18:25:53.511945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.108 [2024-07-11 18:25:53.513184] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:07.108 [2024-07-11 18:25:53.515404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.108 [2024-07-11 18:25:53.515441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:07.108 [2024-07-11 18:25:53.515485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.221 ms 00:19:07.108 [2024-07-11 18:25:53.515502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.108 [2024-07-11 18:25:53.515564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.108 [2024-07-11 18:25:53.515582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:07.108 [2024-07-11 18:25:53.515593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:19:07.108 [2024-07-11 18:25:53.515602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.108 [2024-07-11 18:25:53.520189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:07.108 [2024-07-11 18:25:53.520227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:07.108 [2024-07-11 18:25:53.520272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.516 ms 00:19:07.108 [2024-07-11 18:25:53.520282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.108 [2024-07-11 18:25:53.520395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.108 [2024-07-11 18:25:53.520417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:07.108 [2024-07-11 18:25:53.520428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:19:07.108 [2024-07-11 18:25:53.520450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.108 [2024-07-11 18:25:53.520523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.108 [2024-07-11 18:25:53.520539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:07.108 [2024-07-11 18:25:53.520555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:07.108 [2024-07-11 18:25:53.520565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.108 [2024-07-11 18:25:53.520598] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:07.368 [2024-07-11 18:25:53.522291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.368 [2024-07-11 18:25:53.522474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:07.368 [2024-07-11 18:25:53.522609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.704 ms 00:19:07.368 [2024-07-11 18:25:53.522657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.368 [2024-07-11 18:25:53.522885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.368 [2024-07-11 18:25:53.522941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:07.368 [2024-07-11 18:25:53.523010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:19:07.368 [2024-07-11 18:25:53.523084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.368 [2024-07-11 18:25:53.523224] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:07.368 [2024-07-11 18:25:53.523411] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:07.368 [2024-07-11 18:25:53.523582] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:07.368 [2024-07-11 18:25:53.523611] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:19:07.368 [2024-07-11 18:25:53.523735] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:07.368 [2024-07-11 18:25:53.523752] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:07.368 [2024-07-11 18:25:53.523781] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:19:07.368 [2024-07-11 18:25:53.523805] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:07.368 [2024-07-11 18:25:53.523825] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:07.368 [2024-07-11 18:25:53.523837] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:07.368 [2024-07-11 18:25:53.523847] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:07.368 [2024-07-11 18:25:53.523856] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:07.368 [2024-07-11 18:25:53.523866] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:07.368 [2024-07-11 18:25:53.523877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.368 [2024-07-11 18:25:53.523888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:07.368 [2024-07-11 18:25:53.523920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.657 ms 00:19:07.368 [2024-07-11 18:25:53.523939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.368 [2024-07-11 18:25:53.524079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.368 [2024-07-11 18:25:53.524111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:07.368 [2024-07-11 18:25:53.524128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:19:07.368 [2024-07-11 18:25:53.524140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.368 [2024-07-11 18:25:53.524276] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:07.368 [2024-07-11 18:25:53.524296] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:07.368 [2024-07-11 18:25:53.524309] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:07.368 [2024-07-11 18:25:53.524329] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:07.368 [2024-07-11 18:25:53.524341] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:07.368 [2024-07-11 18:25:53.524352] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:07.368 [2024-07-11 18:25:53.524363] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:07.368 [2024-07-11 18:25:53.524373] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:07.368 [2024-07-11 18:25:53.524384] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:07.368 [2024-07-11 18:25:53.524395] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:07.368 [2024-07-11 18:25:53.524405] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:07.368 [2024-07-11 18:25:53.524416] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:07.368 [2024-07-11 18:25:53.524426] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:07.368 [2024-07-11 18:25:53.524436] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:07.368 [2024-07-11 18:25:53.524450] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:07.368 [2024-07-11 18:25:53.524461] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:07.368 [2024-07-11 18:25:53.524472] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:07.368 [2024-07-11 18:25:53.524483] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:07.368 [2024-07-11 18:25:53.524494] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:07.368 [2024-07-11 18:25:53.524504] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:07.368 [2024-07-11 18:25:53.524515] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:07.368 [2024-07-11 18:25:53.524525] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:07.368 [2024-07-11 18:25:53.524536] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:07.368 [2024-07-11 18:25:53.524546] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:07.368 [2024-07-11 18:25:53.524556] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:07.368 [2024-07-11 18:25:53.524566] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:07.368 [2024-07-11 18:25:53.524577] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:07.368 [2024-07-11 18:25:53.524587] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:07.368 [2024-07-11 18:25:53.524598] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:07.368 [2024-07-11 18:25:53.524608] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:07.368 [2024-07-11 18:25:53.524624] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:07.368 [2024-07-11 18:25:53.524635] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:07.368 [2024-07-11 18:25:53.524646] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:07.368 [2024-07-11 18:25:53.524656] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:07.368 [2024-07-11 18:25:53.524666] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:07.368 [2024-07-11 18:25:53.524677] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:07.368 [2024-07-11 18:25:53.524687] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:07.368 [2024-07-11 18:25:53.524697] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:07.368 [2024-07-11 18:25:53.524707] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:19:07.368 [2024-07-11 18:25:53.524718] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:07.368 [2024-07-11 18:25:53.524728] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:07.368 [2024-07-11 18:25:53.524739] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:07.368 [2024-07-11 18:25:53.524749] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:07.368 [2024-07-11 18:25:53.524759] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:07.368 [2024-07-11 18:25:53.524770] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:07.368 [2024-07-11 18:25:53.524781] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:07.368 [2024-07-11 18:25:53.524809] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:07.368 [2024-07-11 18:25:53.524821] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:07.368 [2024-07-11 18:25:53.524834] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:07.368 [2024-07-11 18:25:53.524855] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:07.368 
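Two sanity checks fall out of the layout dump above: the L2P table (20971520 entries at the logged 4-byte address size) is exactly the 80.00 MiB reported for the l2p region, and each P2L region (2048 checkpoint pages at the 4 KiB block size the region sizes imply) is exactly the 8.00 MiB reported:

    awk 'BEGIN { printf "l2p: %.2f MiB\n", 20971520 * 4 / 1048576 }'   # -> 80.00 MiB
    awk 'BEGIN { printf "p2l: %.2f MiB\n", 2048 * 4096 / 1048576 }'    # -> 8.00 MiB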
[2024-07-11 18:25:53.524865] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:07.368 [2024-07-11 18:25:53.524875] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:07.368 [2024-07-11 18:25:53.524884] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:07.368 [2024-07-11 18:25:53.524896] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:07.368 [2024-07-11 18:25:53.524925] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:07.368 [2024-07-11 18:25:53.524938] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:07.368 [2024-07-11 18:25:53.524949] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:07.368 [2024-07-11 18:25:53.524975] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:07.368 [2024-07-11 18:25:53.524987] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:07.368 [2024-07-11 18:25:53.524999] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:07.368 [2024-07-11 18:25:53.525024] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:07.368 [2024-07-11 18:25:53.525036] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:07.368 [2024-07-11 18:25:53.525064] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:07.368 [2024-07-11 18:25:53.525091] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:07.368 [2024-07-11 18:25:53.525103] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:07.368 [2024-07-11 18:25:53.525114] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:07.368 [2024-07-11 18:25:53.525125] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:07.368 [2024-07-11 18:25:53.525137] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:07.368 [2024-07-11 18:25:53.525148] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:07.368 [2024-07-11 18:25:53.525160] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:07.369 [2024-07-11 18:25:53.525172] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:07.369 [2024-07-11 18:25:53.525212] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:19:07.369 [2024-07-11 18:25:53.525224] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:07.369 [2024-07-11 18:25:53.525236] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:07.369 [2024-07-11 18:25:53.525248] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:07.369 [2024-07-11 18:25:53.525260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.369 [2024-07-11 18:25:53.525272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:07.369 [2024-07-11 18:25:53.525289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.052 ms 00:19:07.369 [2024-07-11 18:25:53.525302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.369 [2024-07-11 18:25:53.545307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.369 [2024-07-11 18:25:53.545379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:07.369 [2024-07-11 18:25:53.545405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.922 ms 00:19:07.369 [2024-07-11 18:25:53.545421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.369 [2024-07-11 18:25:53.545581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.369 [2024-07-11 18:25:53.545617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:07.369 [2024-07-11 18:25:53.545635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:19:07.369 [2024-07-11 18:25:53.545679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.369 [2024-07-11 18:25:53.555649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.369 [2024-07-11 18:25:53.555695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:07.369 [2024-07-11 18:25:53.555727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.863 ms 00:19:07.369 [2024-07-11 18:25:53.555737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.369 [2024-07-11 18:25:53.555782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.369 [2024-07-11 18:25:53.555797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:07.369 [2024-07-11 18:25:53.555815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:07.369 [2024-07-11 18:25:53.555824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.369 [2024-07-11 18:25:53.556216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.369 [2024-07-11 18:25:53.556274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:07.369 [2024-07-11 18:25:53.556294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.332 ms 00:19:07.369 [2024-07-11 18:25:53.556334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.369 [2024-07-11 18:25:53.556562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.369 [2024-07-11 18:25:53.556605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:07.369 [2024-07-11 18:25:53.556632] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.159 ms 00:19:07.369 [2024-07-11 18:25:53.556647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.369 [2024-07-11 18:25:53.561635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.369 [2024-07-11 18:25:53.561673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:07.369 [2024-07-11 18:25:53.561705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.960 ms 00:19:07.369 [2024-07-11 18:25:53.561715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.369 [2024-07-11 18:25:53.564029] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:19:07.369 [2024-07-11 18:25:53.564070] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:07.369 [2024-07-11 18:25:53.564144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.369 [2024-07-11 18:25:53.564165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:07.369 [2024-07-11 18:25:53.564183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.333 ms 00:19:07.369 [2024-07-11 18:25:53.564206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.369 [2024-07-11 18:25:53.578231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.369 [2024-07-11 18:25:53.578271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:07.369 [2024-07-11 18:25:53.578315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.918 ms 00:19:07.369 [2024-07-11 18:25:53.578326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.369 [2024-07-11 18:25:53.580211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.369 [2024-07-11 18:25:53.580249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:07.369 [2024-07-11 18:25:53.580279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.840 ms 00:19:07.369 [2024-07-11 18:25:53.580289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.369 [2024-07-11 18:25:53.581881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.369 [2024-07-11 18:25:53.581920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:07.369 [2024-07-11 18:25:53.581934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.552 ms 00:19:07.369 [2024-07-11 18:25:53.581943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.369 [2024-07-11 18:25:53.582340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.369 [2024-07-11 18:25:53.582385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:07.369 [2024-07-11 18:25:53.582399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.310 ms 00:19:07.369 [2024-07-11 18:25:53.582409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.369 [2024-07-11 18:25:53.599220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.369 [2024-07-11 18:25:53.599290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:07.369 [2024-07-11 18:25:53.599326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
16.779 ms 00:19:07.369 [2024-07-11 18:25:53.599336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.369 [2024-07-11 18:25:53.606654] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:19:07.369 [2024-07-11 18:25:53.608913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.369 [2024-07-11 18:25:53.608948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:07.369 [2024-07-11 18:25:53.608979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.518 ms 00:19:07.369 [2024-07-11 18:25:53.608989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.369 [2024-07-11 18:25:53.609053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.369 [2024-07-11 18:25:53.609079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:07.369 [2024-07-11 18:25:53.609091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:07.369 [2024-07-11 18:25:53.609144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.369 [2024-07-11 18:25:53.609318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.369 [2024-07-11 18:25:53.609356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:07.369 [2024-07-11 18:25:53.609388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:19:07.369 [2024-07-11 18:25:53.609409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.369 [2024-07-11 18:25:53.609462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.369 [2024-07-11 18:25:53.609514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:07.369 [2024-07-11 18:25:53.609527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:07.369 [2024-07-11 18:25:53.609538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.369 [2024-07-11 18:25:53.609579] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:07.369 [2024-07-11 18:25:53.609594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.369 [2024-07-11 18:25:53.609617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:07.369 [2024-07-11 18:25:53.609636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:19:07.369 [2024-07-11 18:25:53.609649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.369 [2024-07-11 18:25:53.613341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.369 [2024-07-11 18:25:53.613535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:07.369 [2024-07-11 18:25:53.613659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.658 ms 00:19:07.369 [2024-07-11 18:25:53.613717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.369 [2024-07-11 18:25:53.613927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.369 [2024-07-11 18:25:53.613987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:07.369 [2024-07-11 18:25:53.614039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:19:07.369 [2024-07-11 18:25:53.614200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.369 
[2024-07-11 18:25:53.615643] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 103.635 ms, result 0
00:19:51.007  Copying: 1024/1024 [MB] (average 23 MBps)
[2024-07-11 18:26:37.133826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:51.007 [2024-07-11 18:26:37.134013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:19:51.007 [2024-07-11 18:26:37.134053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:19:51.007 [2024-07-11 18:26:37.134066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:51.007 [2024-07-11 18:26:37.134137] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:19:51.007 [2024-07-11 18:26:37.134583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:51.007 [2024-07-11 18:26:37.134604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:19:51.007 [2024-07-11 18:26:37.134618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.420 ms
00:19:51.007 [2024-07-11 18:26:37.134628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:51.007 [2024-07-11 18:26:37.136222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:51.007 [2024-07-11 18:26:37.136301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:19:51.007 [2024-07-11 18:26:37.136316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.569 ms
00:19:51.007 [2024-07-11 18:26:37.136335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:51.007 [2024-07-11 18:26:37.151090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:51.007 [2024-07-11 18:26:37.151143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:19:51.007 [2024-07-11 18:26:37.151195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.733 ms
00:19:51.007 [2024-07-11 18:26:37.151205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*:
[FTL][ftl0] status: 0 00:19:51.007 [2024-07-11 18:26:37.157058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.007 [2024-07-11 18:26:37.157114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:51.007 [2024-07-11 18:26:37.157156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.786 ms 00:19:51.007 [2024-07-11 18:26:37.157176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.007 [2024-07-11 18:26:37.158492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.007 [2024-07-11 18:26:37.158544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:51.007 [2024-07-11 18:26:37.158575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.248 ms 00:19:51.007 [2024-07-11 18:26:37.158585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.007 [2024-07-11 18:26:37.161936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.007 [2024-07-11 18:26:37.161975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:51.007 [2024-07-11 18:26:37.162006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.315 ms 00:19:51.007 [2024-07-11 18:26:37.162016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.007 [2024-07-11 18:26:37.162174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.007 [2024-07-11 18:26:37.162194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:51.007 [2024-07-11 18:26:37.162206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:19:51.007 [2024-07-11 18:26:37.162224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.007 [2024-07-11 18:26:37.164175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.007 [2024-07-11 18:26:37.164258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:19:51.007 [2024-07-11 18:26:37.164306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.932 ms 00:19:51.007 [2024-07-11 18:26:37.164316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.007 [2024-07-11 18:26:37.165800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.007 [2024-07-11 18:26:37.165836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:19:51.007 [2024-07-11 18:26:37.165865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.447 ms 00:19:51.007 [2024-07-11 18:26:37.165875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.007 [2024-07-11 18:26:37.167065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.007 [2024-07-11 18:26:37.167166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:51.007 [2024-07-11 18:26:37.167198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.156 ms 00:19:51.007 [2024-07-11 18:26:37.167208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.007 [2024-07-11 18:26:37.168428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.007 [2024-07-11 18:26:37.168463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:51.007 [2024-07-11 18:26:37.168494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.144 ms 00:19:51.007 
[2024-07-11 18:26:37.168504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:51.007 [2024-07-11 18:26:37.168538] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:19:51.007 [2024-07-11 18:26:37.168560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free
(Bands 2 through 97: identical ftl_dev_dump_bands entries, each 0 / 261120 wr_cnt: 0 state: free)
00:19:51.008 [2024-07-11 18:26:37.169623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120
wr_cnt: 0 state: free 00:19:51.008 [2024-07-11 18:26:37.169633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:51.008 [2024-07-11 18:26:37.169644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:51.008 [2024-07-11 18:26:37.169662] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:51.008 [2024-07-11 18:26:37.169673] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2261f664-bc0a-4247-828d-02457d49b5ee 00:19:51.008 [2024-07-11 18:26:37.169683] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:51.008 [2024-07-11 18:26:37.169693] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:51.008 [2024-07-11 18:26:37.169703] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:51.008 [2024-07-11 18:26:37.169713] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:51.008 [2024-07-11 18:26:37.169722] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:51.008 [2024-07-11 18:26:37.169732] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:51.008 [2024-07-11 18:26:37.169747] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:51.008 [2024-07-11 18:26:37.169756] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:51.008 [2024-07-11 18:26:37.169765] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:51.009 [2024-07-11 18:26:37.169775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.009 [2024-07-11 18:26:37.169785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:51.009 [2024-07-11 18:26:37.169796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.238 ms 00:19:51.009 [2024-07-11 18:26:37.169805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.009 [2024-07-11 18:26:37.171183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.009 [2024-07-11 18:26:37.171233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:51.009 [2024-07-11 18:26:37.171246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.358 ms 00:19:51.009 [2024-07-11 18:26:37.171257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.009 [2024-07-11 18:26:37.171349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.009 [2024-07-11 18:26:37.171363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:51.009 [2024-07-11 18:26:37.171374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:19:51.009 [2024-07-11 18:26:37.171384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.009 [2024-07-11 18:26:37.176562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.009 [2024-07-11 18:26:37.176824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:51.009 [2024-07-11 18:26:37.176998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.009 [2024-07-11 18:26:37.177157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.009 [2024-07-11 18:26:37.177344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.009 [2024-07-11 18:26:37.177461] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:51.009 [2024-07-11 18:26:37.177568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.009 [2024-07-11 18:26:37.177727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.009 [2024-07-11 18:26:37.177907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.009 [2024-07-11 18:26:37.178031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:51.009 [2024-07-11 18:26:37.178053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.009 [2024-07-11 18:26:37.178090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.009 [2024-07-11 18:26:37.178137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.009 [2024-07-11 18:26:37.178159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:51.009 [2024-07-11 18:26:37.178172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.009 [2024-07-11 18:26:37.178182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.009 [2024-07-11 18:26:37.186965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.009 [2024-07-11 18:26:37.187040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:51.009 [2024-07-11 18:26:37.187059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.009 [2024-07-11 18:26:37.187102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.009 [2024-07-11 18:26:37.194222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.009 [2024-07-11 18:26:37.194266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:51.009 [2024-07-11 18:26:37.194314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.009 [2024-07-11 18:26:37.194325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.009 [2024-07-11 18:26:37.194389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.009 [2024-07-11 18:26:37.194405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:51.009 [2024-07-11 18:26:37.194416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.009 [2024-07-11 18:26:37.194426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.009 [2024-07-11 18:26:37.194455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.009 [2024-07-11 18:26:37.194486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:51.009 [2024-07-11 18:26:37.194504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.009 [2024-07-11 18:26:37.194514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.009 [2024-07-11 18:26:37.194602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.009 [2024-07-11 18:26:37.194636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:51.009 [2024-07-11 18:26:37.194647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.009 [2024-07-11 18:26:37.194671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.009 [2024-07-11 18:26:37.194711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback
00:19:51.009 [2024-07-11 18:26:37.194726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:19:51.009 [2024-07-11 18:26:37.194742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:51.009 [2024-07-11 18:26:37.194752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:51.009 [2024-07-11 18:26:37.194793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:51.009 [2024-07-11 18:26:37.194835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:19:51.009 [2024-07-11 18:26:37.194851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:51.009 [2024-07-11 18:26:37.194862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:51.009 [2024-07-11 18:26:37.194926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:51.009 [2024-07-11 18:26:37.194955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:19:51.009 [2024-07-11 18:26:37.194967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:51.009 [2024-07-11 18:26:37.194978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:51.009 [2024-07-11 18:26:37.195181] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 61.254 ms, result 0
00:19:51.575 
00:19:51.575 
00:19:51.575 18:26:37 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144
[2024-07-11 18:26:37.946627] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:19:51.575 [2024-07-11 18:26:37.946884] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91883 ] 00:19:51.833 [2024-07-11 18:26:38.094497] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:51.833 [2024-07-11 18:26:38.128583] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:51.833 [2024-07-11 18:26:38.213206] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:51.833 [2024-07-11 18:26:38.213315] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:52.092 [2024-07-11 18:26:38.369307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.092 [2024-07-11 18:26:38.369374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:52.092 [2024-07-11 18:26:38.369420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:52.092 [2024-07-11 18:26:38.369461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.092 [2024-07-11 18:26:38.369530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.092 [2024-07-11 18:26:38.369549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:52.092 [2024-07-11 18:26:38.369565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:19:52.092 [2024-07-11 18:26:38.369577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.092 [2024-07-11 18:26:38.369622] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:52.092 [2024-07-11 18:26:38.369924] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:52.092 [2024-07-11 18:26:38.369949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.092 [2024-07-11 18:26:38.369960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:52.092 [2024-07-11 18:26:38.369972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.334 ms 00:19:52.092 [2024-07-11 18:26:38.369985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.092 [2024-07-11 18:26:38.371339] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:52.092 [2024-07-11 18:26:38.373601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.092 [2024-07-11 18:26:38.373641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:52.092 [2024-07-11 18:26:38.373678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.265 ms 00:19:52.092 [2024-07-11 18:26:38.373699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.092 [2024-07-11 18:26:38.373765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.092 [2024-07-11 18:26:38.373784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:52.092 [2024-07-11 18:26:38.373796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:19:52.092 [2024-07-11 18:26:38.373807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.092 [2024-07-11 18:26:38.378487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:52.092 [2024-07-11 18:26:38.378531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:52.092 [2024-07-11 18:26:38.378562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.595 ms 00:19:52.092 [2024-07-11 18:26:38.378574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.092 [2024-07-11 18:26:38.378714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.092 [2024-07-11 18:26:38.378734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:52.092 [2024-07-11 18:26:38.378770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:19:52.092 [2024-07-11 18:26:38.378781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.092 [2024-07-11 18:26:38.378882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.092 [2024-07-11 18:26:38.378901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:52.092 [2024-07-11 18:26:38.378920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:19:52.092 [2024-07-11 18:26:38.378932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.092 [2024-07-11 18:26:38.378966] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:52.092 [2024-07-11 18:26:38.380492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.092 [2024-07-11 18:26:38.380525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:52.092 [2024-07-11 18:26:38.380556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.536 ms 00:19:52.092 [2024-07-11 18:26:38.380567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.092 [2024-07-11 18:26:38.380620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.092 [2024-07-11 18:26:38.380655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:52.092 [2024-07-11 18:26:38.380667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:52.092 [2024-07-11 18:26:38.380677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.092 [2024-07-11 18:26:38.380715] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:52.092 [2024-07-11 18:26:38.380758] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:52.092 [2024-07-11 18:26:38.380830] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:52.092 [2024-07-11 18:26:38.380852] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:19:52.092 [2024-07-11 18:26:38.380976] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:52.092 [2024-07-11 18:26:38.380992] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:52.092 [2024-07-11 18:26:38.381006] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:19:52.092 [2024-07-11 18:26:38.381020] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:52.092 [2024-07-11 18:26:38.381033] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:52.092 [2024-07-11 18:26:38.381059] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:52.092 [2024-07-11 18:26:38.381077] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:52.092 [2024-07-11 18:26:38.381103] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:52.092 [2024-07-11 18:26:38.381121] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:52.092 [2024-07-11 18:26:38.381134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.092 [2024-07-11 18:26:38.381145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:52.092 [2024-07-11 18:26:38.381162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.422 ms 00:19:52.092 [2024-07-11 18:26:38.381173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.092 [2024-07-11 18:26:38.381359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.092 [2024-07-11 18:26:38.381417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:52.092 [2024-07-11 18:26:38.381434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:19:52.092 [2024-07-11 18:26:38.381446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.092 [2024-07-11 18:26:38.381600] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:52.092 [2024-07-11 18:26:38.381616] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:52.092 [2024-07-11 18:26:38.381630] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:52.092 [2024-07-11 18:26:38.381656] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:52.092 [2024-07-11 18:26:38.381668] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:52.092 [2024-07-11 18:26:38.381679] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:52.092 [2024-07-11 18:26:38.381704] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:52.092 [2024-07-11 18:26:38.381714] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:52.092 [2024-07-11 18:26:38.381725] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:52.093 [2024-07-11 18:26:38.381735] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:52.093 [2024-07-11 18:26:38.381745] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:52.093 [2024-07-11 18:26:38.381756] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:52.093 [2024-07-11 18:26:38.381766] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:52.093 [2024-07-11 18:26:38.381776] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:52.093 [2024-07-11 18:26:38.381790] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:52.093 [2024-07-11 18:26:38.381800] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:52.093 [2024-07-11 18:26:38.381811] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:52.093 [2024-07-11 18:26:38.381823] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:52.093 [2024-07-11 18:26:38.381833] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:52.093 [2024-07-11 18:26:38.381843] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:52.093 [2024-07-11 18:26:38.381854] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:52.093 [2024-07-11 18:26:38.381864] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:52.093 [2024-07-11 18:26:38.381874] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:52.093 [2024-07-11 18:26:38.381884] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:52.093 [2024-07-11 18:26:38.381894] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:52.093 [2024-07-11 18:26:38.381904] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:52.093 [2024-07-11 18:26:38.381914] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:52.093 [2024-07-11 18:26:38.381924] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:52.093 [2024-07-11 18:26:38.381935] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:52.093 [2024-07-11 18:26:38.381945] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:52.093 [2024-07-11 18:26:38.381960] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:52.093 [2024-07-11 18:26:38.381971] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:52.093 [2024-07-11 18:26:38.381981] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:52.093 [2024-07-11 18:26:38.381991] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:52.093 [2024-07-11 18:26:38.382001] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:52.093 [2024-07-11 18:26:38.382011] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:52.093 [2024-07-11 18:26:38.382021] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:52.093 [2024-07-11 18:26:38.382031] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:52.093 [2024-07-11 18:26:38.382041] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:19:52.093 [2024-07-11 18:26:38.382051] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:52.093 [2024-07-11 18:26:38.382062] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:52.093 [2024-07-11 18:26:38.382072] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:52.093 [2024-07-11 18:26:38.382082] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:52.093 [2024-07-11 18:26:38.382091] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:52.093 [2024-07-11 18:26:38.382111] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:52.093 [2024-07-11 18:26:38.382124] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:52.093 [2024-07-11 18:26:38.382138] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:52.093 [2024-07-11 18:26:38.382150] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:52.093 [2024-07-11 18:26:38.382162] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:52.093 [2024-07-11 18:26:38.382173] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:52.093 
[2024-07-11 18:26:38.382183] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:52.093 [2024-07-11 18:26:38.382207] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:52.093 [2024-07-11 18:26:38.382221] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:52.093 [2024-07-11 18:26:38.382249] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:52.093 [2024-07-11 18:26:38.382263] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:52.093 [2024-07-11 18:26:38.382292] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:52.093 [2024-07-11 18:26:38.382305] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:52.093 [2024-07-11 18:26:38.382317] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:52.093 [2024-07-11 18:26:38.382328] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:52.093 [2024-07-11 18:26:38.382340] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:52.093 [2024-07-11 18:26:38.382352] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:52.093 [2024-07-11 18:26:38.382364] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:52.093 [2024-07-11 18:26:38.382379] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:52.093 [2024-07-11 18:26:38.382392] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:52.093 [2024-07-11 18:26:38.382403] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:52.093 [2024-07-11 18:26:38.382415] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:52.093 [2024-07-11 18:26:38.382427] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:52.093 [2024-07-11 18:26:38.382439] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:52.093 [2024-07-11 18:26:38.382451] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:52.093 [2024-07-11 18:26:38.382463] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:52.093 [2024-07-11 18:26:38.382485] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:52.093 [2024-07-11 18:26:38.382515] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:19:52.093 [2024-07-11 18:26:38.382527] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:52.093 [2024-07-11 18:26:38.382539] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:52.093 [2024-07-11 18:26:38.382551] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:52.093 [2024-07-11 18:26:38.382564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.093 [2024-07-11 18:26:38.382585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:52.093 [2024-07-11 18:26:38.382604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.045 ms 00:19:52.093 [2024-07-11 18:26:38.382618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.093 [2024-07-11 18:26:38.400315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.093 [2024-07-11 18:26:38.400387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:52.093 [2024-07-11 18:26:38.400414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.603 ms 00:19:52.093 [2024-07-11 18:26:38.400430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.093 [2024-07-11 18:26:38.400608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.093 [2024-07-11 18:26:38.400651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:52.093 [2024-07-11 18:26:38.400678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:19:52.093 [2024-07-11 18:26:38.400701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.093 [2024-07-11 18:26:38.410625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.093 [2024-07-11 18:26:38.410716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:52.093 [2024-07-11 18:26:38.410739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.797 ms 00:19:52.093 [2024-07-11 18:26:38.410755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.093 [2024-07-11 18:26:38.410842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.093 [2024-07-11 18:26:38.410875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:52.093 [2024-07-11 18:26:38.410900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:52.093 [2024-07-11 18:26:38.410915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.093 [2024-07-11 18:26:38.411449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.093 [2024-07-11 18:26:38.411475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:52.093 [2024-07-11 18:26:38.411490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.448 ms 00:19:52.093 [2024-07-11 18:26:38.411507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.093 [2024-07-11 18:26:38.411710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.093 [2024-07-11 18:26:38.411736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:52.093 [2024-07-11 18:26:38.411757] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.170 ms 00:19:52.093 [2024-07-11 18:26:38.411772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.093 [2024-07-11 18:26:38.416973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.093 [2024-07-11 18:26:38.417011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:52.093 [2024-07-11 18:26:38.417043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.173 ms 00:19:52.093 [2024-07-11 18:26:38.417066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.093 [2024-07-11 18:26:38.419785] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:19:52.093 [2024-07-11 18:26:38.419834] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:52.093 [2024-07-11 18:26:38.419882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.093 [2024-07-11 18:26:38.419894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:52.093 [2024-07-11 18:26:38.419905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.664 ms 00:19:52.093 [2024-07-11 18:26:38.419918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.093 [2024-07-11 18:26:38.434369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.093 [2024-07-11 18:26:38.434562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:52.093 [2024-07-11 18:26:38.434589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.408 ms 00:19:52.093 [2024-07-11 18:26:38.434601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.094 [2024-07-11 18:26:38.436683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.094 [2024-07-11 18:26:38.436725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:52.094 [2024-07-11 18:26:38.436740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.033 ms 00:19:52.094 [2024-07-11 18:26:38.436751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.094 [2024-07-11 18:26:38.438453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.094 [2024-07-11 18:26:38.438493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:52.094 [2024-07-11 18:26:38.438523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.660 ms 00:19:52.094 [2024-07-11 18:26:38.438534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.094 [2024-07-11 18:26:38.438959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.094 [2024-07-11 18:26:38.438987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:52.094 [2024-07-11 18:26:38.439003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.349 ms 00:19:52.094 [2024-07-11 18:26:38.439024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.094 [2024-07-11 18:26:38.455047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.094 [2024-07-11 18:26:38.455165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:52.094 [2024-07-11 18:26:38.455203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
15.986 ms 00:19:52.094 [2024-07-11 18:26:38.455215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.094 [2024-07-11 18:26:38.462773] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:19:52.094 [2024-07-11 18:26:38.465357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.094 [2024-07-11 18:26:38.465394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:52.094 [2024-07-11 18:26:38.465427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.064 ms 00:19:52.094 [2024-07-11 18:26:38.465438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.094 [2024-07-11 18:26:38.465522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.094 [2024-07-11 18:26:38.465540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:52.094 [2024-07-11 18:26:38.465562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:52.094 [2024-07-11 18:26:38.465573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.094 [2024-07-11 18:26:38.465676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.094 [2024-07-11 18:26:38.465694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:52.094 [2024-07-11 18:26:38.465706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:19:52.094 [2024-07-11 18:26:38.465716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.094 [2024-07-11 18:26:38.465746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.094 [2024-07-11 18:26:38.465760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:52.094 [2024-07-11 18:26:38.465770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:52.094 [2024-07-11 18:26:38.465780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.094 [2024-07-11 18:26:38.465817] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:52.094 [2024-07-11 18:26:38.465845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.094 [2024-07-11 18:26:38.465861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:52.094 [2024-07-11 18:26:38.465872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:19:52.094 [2024-07-11 18:26:38.465882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.094 [2024-07-11 18:26:38.469698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.094 [2024-07-11 18:26:38.469743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:52.094 [2024-07-11 18:26:38.469760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.794 ms 00:19:52.094 [2024-07-11 18:26:38.469787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.094 [2024-07-11 18:26:38.469886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.094 [2024-07-11 18:26:38.469906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:52.094 [2024-07-11 18:26:38.469940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:19:52.094 [2024-07-11 18:26:38.469952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.094 
[2024-07-11 18:26:38.471480] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 101.550 ms, result 0 00:20:34.858  Copying: 1024/1024 [MB] (average 24 MBps)[2024-07-11 18:27:21.031810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.858 [2024-07-11 18:27:21.031884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:34.858 [2024-07-11 18:27:21.031906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:34.858 [2024-07-11 18:27:21.031918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.858 [2024-07-11 18:27:21.031947] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:34.858 [2024-07-11 18:27:21.032389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.858 [2024-07-11 18:27:21.032411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:34.858 [2024-07-11 18:27:21.032424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.420 ms 00:20:34.858 [2024-07-11 18:27:21.032435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.858 [2024-07-11 18:27:21.032652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.858 [2024-07-11 18:27:21.032670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:34.858 [2024-07-11 18:27:21.032689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.193 ms 00:20:34.858 [2024-07-11 18:27:21.032700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.858 [2024-07-11 18:27:21.036136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.858 [2024-07-11 18:27:21.036180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:34.858 [2024-07-11 18:27:21.036195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.407 ms 00:20:34.858 [2024-07-11 18:27:21.036207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.858 
[2024-07-11 18:27:21.042879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.858 [2024-07-11 18:27:21.042935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:34.858 [2024-07-11 18:27:21.042950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.650 ms 00:20:34.858 [2024-07-11 18:27:21.042968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.858 [2024-07-11 18:27:21.044327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.858 [2024-07-11 18:27:21.044363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:34.858 [2024-07-11 18:27:21.044393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.295 ms 00:20:34.858 [2024-07-11 18:27:21.044403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.858 [2024-07-11 18:27:21.047754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.858 [2024-07-11 18:27:21.047793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:34.858 [2024-07-11 18:27:21.047824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.314 ms 00:20:34.858 [2024-07-11 18:27:21.047835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.858 [2024-07-11 18:27:21.047975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.858 [2024-07-11 18:27:21.047993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:34.858 [2024-07-11 18:27:21.048010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:20:34.858 [2024-07-11 18:27:21.048020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.858 [2024-07-11 18:27:21.050153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.858 [2024-07-11 18:27:21.050246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:20:34.858 [2024-07-11 18:27:21.050262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.104 ms 00:20:34.858 [2024-07-11 18:27:21.050271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.858 [2024-07-11 18:27:21.051863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.858 [2024-07-11 18:27:21.051928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:20:34.858 [2024-07-11 18:27:21.051958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.540 ms 00:20:34.858 [2024-07-11 18:27:21.051967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.858 [2024-07-11 18:27:21.053313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.858 [2024-07-11 18:27:21.053347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:34.858 [2024-07-11 18:27:21.053378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.310 ms 00:20:34.858 [2024-07-11 18:27:21.053404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.858 [2024-07-11 18:27:21.054552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.858 [2024-07-11 18:27:21.054590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:34.858 [2024-07-11 18:27:21.054622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.081 ms 00:20:34.858 [2024-07-11 18:27:21.054632] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.858 [2024-07-11 18:27:21.054669] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:34.859 [2024-07-11 18:27:21.054690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:34.859 [2024-07-11 18:27:21.054705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:34.859 [2024-07-11 18:27:21.054717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:34.859 [2024-07-11 18:27:21.054728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:34.859 [2024-07-11 18:27:21.054740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:34.859 [2024-07-11 18:27:21.054765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:34.859 [2024-07-11 18:27:21.054777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:34.859 [2024-07-11 18:27:21.054787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:34.859 [2024-07-11 18:27:21.054799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:34.859 [2024-07-11 18:27:21.054810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:34.859 [2024-07-11 18:27:21.054821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:34.859 [2024-07-11 18:27:21.054832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:34.859 [2024-07-11 18:27:21.054843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:34.859 [2024-07-11 18:27:21.054854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:34.859 [2024-07-11 18:27:21.054865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:34.859 [2024-07-11 18:27:21.054876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:34.859 [2024-07-11 18:27:21.054887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:34.859 [2024-07-11 18:27:21.054907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:34.859 [2024-07-11 18:27:21.054952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:34.859 [2024-07-11 18:27:21.054964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:34.859 [2024-07-11 18:27:21.054977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:34.859 [2024-07-11 18:27:21.054989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:34.859 [2024-07-11 18:27:21.055001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:34.859 [2024-07-11 18:27:21.055013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:34.859 [2024-07-11 18:27:21.055025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:34.859 [2024-07-11 18:27:21.055037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:34.859 [2024-07-11 18:27:21.055051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:34.859 [2024-07-11 18:27:21.055064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:34.859 [2024-07-11 18:27:21.055076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:34.859 [2024-07-11 18:27:21.055088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:34.859 [2024-07-11 18:27:21.055100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:34.859 [2024-07-11 18:27:21.055134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:34.859 [2024-07-11 18:27:21.055149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:34.859 [2024-07-11 18:27:21.055162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:34.859 [2024-07-11 18:27:21.055174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:34.859 [2024-07-11 18:27:21.055202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:34.859 [2024-07-11 18:27:21.055214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:34.859 [2024-07-11 18:27:21.055226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:34.859 [2024-07-11 18:27:21.055252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:34.859 [2024-07-11 18:27:21.055281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:34.859 [2024-07-11 18:27:21.055293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:34.859 [2024-07-11 18:27:21.055304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:34.859 [2024-07-11 18:27:21.055315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:34.859 [2024-07-11 18:27:21.055326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:34.859 [2024-07-11 18:27:21.055353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:34.859 [2024-07-11 18:27:21.055364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:34.859 [2024-07-11 18:27:21.055374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:34.859 [2024-07-11 18:27:21.055385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:34.859 [2024-07-11 18:27:21.055396] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:34.859 [2024-07-11 18:27:21.055407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:34.859 [2024-07-11 18:27:21.055417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:34.859 [2024-07-11 18:27:21.055428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:34.859 [2024-07-11 18:27:21.055438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:34.859 [2024-07-11 18:27:21.055464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:34.859 [2024-07-11 18:27:21.055475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:34.859 [2024-07-11 18:27:21.055487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:34.859 [2024-07-11 18:27:21.055497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:34.859 [2024-07-11 18:27:21.055508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:34.859 [2024-07-11 18:27:21.055519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:34.859 [2024-07-11 18:27:21.055530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:34.859 [2024-07-11 18:27:21.055540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:34.859 [2024-07-11 18:27:21.055551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:34.859 [2024-07-11 18:27:21.055562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:34.859 [2024-07-11 18:27:21.055572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:34.859 [2024-07-11 18:27:21.055583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:34.859 [2024-07-11 18:27:21.055594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:34.859 [2024-07-11 18:27:21.055605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:34.859 [2024-07-11 18:27:21.055615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:34.859 [2024-07-11 18:27:21.055626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:34.859 [2024-07-11 18:27:21.055637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:34.859 [2024-07-11 18:27:21.055648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:34.859 [2024-07-11 18:27:21.055659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:34.859 [2024-07-11 18:27:21.055670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:34.859 [2024-07-11 
18:27:21.055681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:34.859 [2024-07-11 18:27:21.055691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:34.859 [2024-07-11 18:27:21.055702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:34.859 [2024-07-11 18:27:21.055713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:34.859 [2024-07-11 18:27:21.055724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:34.859 [2024-07-11 18:27:21.055735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:34.859 [2024-07-11 18:27:21.055746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:34.859 [2024-07-11 18:27:21.055757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:34.859 [2024-07-11 18:27:21.055768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:34.859 [2024-07-11 18:27:21.055778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:34.859 [2024-07-11 18:27:21.055789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:34.859 [2024-07-11 18:27:21.055801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:34.859 [2024-07-11 18:27:21.055812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:34.859 [2024-07-11 18:27:21.055822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:34.859 [2024-07-11 18:27:21.055833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:34.859 [2024-07-11 18:27:21.055843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:34.859 [2024-07-11 18:27:21.055854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:34.859 [2024-07-11 18:27:21.055865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:34.859 [2024-07-11 18:27:21.055876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:34.859 [2024-07-11 18:27:21.055887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:34.859 [2024-07-11 18:27:21.055898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:34.859 [2024-07-11 18:27:21.055909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:34.859 [2024-07-11 18:27:21.055919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:34.859 [2024-07-11 18:27:21.055930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:34.859 [2024-07-11 18:27:21.055941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 
00:20:34.859 [2024-07-11 18:27:21.055952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:34.859 [2024-07-11 18:27:21.055962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:34.859 [2024-07-11 18:27:21.055981] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:34.859 [2024-07-11 18:27:21.055992] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2261f664-bc0a-4247-828d-02457d49b5ee 00:20:34.859 [2024-07-11 18:27:21.056003] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:34.859 [2024-07-11 18:27:21.056015] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:34.859 [2024-07-11 18:27:21.056025] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:34.859 [2024-07-11 18:27:21.056035] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:34.859 [2024-07-11 18:27:21.056046] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:34.859 [2024-07-11 18:27:21.056061] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:34.859 [2024-07-11 18:27:21.056071] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:34.859 [2024-07-11 18:27:21.056081] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:34.859 [2024-07-11 18:27:21.056090] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:34.859 [2024-07-11 18:27:21.056100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.859 [2024-07-11 18:27:21.056111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:34.859 [2024-07-11 18:27:21.056122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.433 ms 00:20:34.859 [2024-07-11 18:27:21.056132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.859 [2024-07-11 18:27:21.057814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.859 [2024-07-11 18:27:21.057943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:34.859 [2024-07-11 18:27:21.058054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.646 ms 00:20:34.859 [2024-07-11 18:27:21.058132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.859 [2024-07-11 18:27:21.058282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.859 [2024-07-11 18:27:21.058378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:34.859 [2024-07-11 18:27:21.058479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:20:34.859 [2024-07-11 18:27:21.058601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.859 [2024-07-11 18:27:21.063311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:34.859 [2024-07-11 18:27:21.063465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:34.859 [2024-07-11 18:27:21.063573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:34.859 [2024-07-11 18:27:21.063628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.859 [2024-07-11 18:27:21.063772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:34.859 [2024-07-11 18:27:21.063880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize bands metadata 00:20:34.859 [2024-07-11 18:27:21.063980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:34.859 [2024-07-11 18:27:21.064110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.859 [2024-07-11 18:27:21.064291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:34.859 [2024-07-11 18:27:21.064402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:34.859 [2024-07-11 18:27:21.064542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:34.859 [2024-07-11 18:27:21.064645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.859 [2024-07-11 18:27:21.064716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:34.859 [2024-07-11 18:27:21.064775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:34.859 [2024-07-11 18:27:21.064880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:34.859 [2024-07-11 18:27:21.065011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.859 [2024-07-11 18:27:21.073210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:34.859 [2024-07-11 18:27:21.073526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:34.859 [2024-07-11 18:27:21.073674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:34.859 [2024-07-11 18:27:21.073840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.859 [2024-07-11 18:27:21.080859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:34.859 [2024-07-11 18:27:21.081095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:34.859 [2024-07-11 18:27:21.081124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:34.859 [2024-07-11 18:27:21.081137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.859 [2024-07-11 18:27:21.081215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:34.859 [2024-07-11 18:27:21.081232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:34.859 [2024-07-11 18:27:21.081259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:34.859 [2024-07-11 18:27:21.081271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.859 [2024-07-11 18:27:21.081306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:34.859 [2024-07-11 18:27:21.081321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:34.859 [2024-07-11 18:27:21.081332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:34.859 [2024-07-11 18:27:21.081359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.859 [2024-07-11 18:27:21.081464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:34.859 [2024-07-11 18:27:21.081490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:34.859 [2024-07-11 18:27:21.081503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:34.859 [2024-07-11 18:27:21.081514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.859 [2024-07-11 18:27:21.081564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:20:34.859 [2024-07-11 18:27:21.081601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:34.859 [2024-07-11 18:27:21.081613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:34.859 [2024-07-11 18:27:21.081624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.859 [2024-07-11 18:27:21.081666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:34.859 [2024-07-11 18:27:21.081681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:34.859 [2024-07-11 18:27:21.081692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:34.859 [2024-07-11 18:27:21.081728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.859 [2024-07-11 18:27:21.081782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:34.859 [2024-07-11 18:27:21.081797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:34.859 [2024-07-11 18:27:21.081809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:34.859 [2024-07-11 18:27:21.081820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.859 [2024-07-11 18:27:21.081952] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 50.107 ms, result 0 00:20:35.125 00:20:35.125 00:20:35.125 18:27:21 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:20:37.035 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:20:37.035 18:27:23 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:20:37.035 [2024-07-11 18:27:23.447654] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:20:37.035 [2024-07-11 18:27:23.447872] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92338 ] 00:20:37.294 [2024-07-11 18:27:23.594156] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:37.294 [2024-07-11 18:27:23.637536] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:37.552 [2024-07-11 18:27:23.727205] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:37.552 [2024-07-11 18:27:23.727355] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:37.552 [2024-07-11 18:27:23.882913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.552 [2024-07-11 18:27:23.883002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:37.552 [2024-07-11 18:27:23.883039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:37.552 [2024-07-11 18:27:23.883051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.552 [2024-07-11 18:27:23.883160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.552 [2024-07-11 18:27:23.883187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:37.552 [2024-07-11 18:27:23.883204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:20:37.552 [2024-07-11 18:27:23.883216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.552 [2024-07-11 18:27:23.883278] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:37.552 [2024-07-11 18:27:23.883560] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:37.552 [2024-07-11 18:27:23.883587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.552 [2024-07-11 18:27:23.883599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:37.552 [2024-07-11 18:27:23.883619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.331 ms 00:20:37.552 [2024-07-11 18:27:23.883649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.552 [2024-07-11 18:27:23.884796] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:37.552 [2024-07-11 18:27:23.886992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.552 [2024-07-11 18:27:23.887044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:37.552 [2024-07-11 18:27:23.887067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.197 ms 00:20:37.552 [2024-07-11 18:27:23.887095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.552 [2024-07-11 18:27:23.887168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.552 [2024-07-11 18:27:23.887188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:37.552 [2024-07-11 18:27:23.887201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:20:37.552 [2024-07-11 18:27:23.887213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.552 [2024-07-11 18:27:23.891738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:37.552 [2024-07-11 18:27:23.891776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:37.552 [2024-07-11 18:27:23.891807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.414 ms 00:20:37.552 [2024-07-11 18:27:23.891831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.552 [2024-07-11 18:27:23.891945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.552 [2024-07-11 18:27:23.891964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:37.552 [2024-07-11 18:27:23.891976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:20:37.552 [2024-07-11 18:27:23.891986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.552 [2024-07-11 18:27:23.892073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.552 [2024-07-11 18:27:23.892097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:37.552 [2024-07-11 18:27:23.892116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:20:37.552 [2024-07-11 18:27:23.892147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.553 [2024-07-11 18:27:23.892182] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:37.553 [2024-07-11 18:27:23.893530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.553 [2024-07-11 18:27:23.893564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:37.553 [2024-07-11 18:27:23.893578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.356 ms 00:20:37.553 [2024-07-11 18:27:23.893588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.553 [2024-07-11 18:27:23.893631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.553 [2024-07-11 18:27:23.893648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:37.553 [2024-07-11 18:27:23.893668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:37.553 [2024-07-11 18:27:23.893678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.553 [2024-07-11 18:27:23.893702] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:37.553 [2024-07-11 18:27:23.893727] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:37.553 [2024-07-11 18:27:23.893778] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:37.553 [2024-07-11 18:27:23.893799] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:20:37.553 [2024-07-11 18:27:23.893907] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:37.553 [2024-07-11 18:27:23.893922] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:37.553 [2024-07-11 18:27:23.893936] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:20:37.553 [2024-07-11 18:27:23.893949] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:37.553 [2024-07-11 18:27:23.893961] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:37.553 [2024-07-11 18:27:23.893971] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:37.553 [2024-07-11 18:27:23.893981] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:37.553 [2024-07-11 18:27:23.893991] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:37.553 [2024-07-11 18:27:23.894000] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:37.553 [2024-07-11 18:27:23.894011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.553 [2024-07-11 18:27:23.894021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:37.553 [2024-07-11 18:27:23.894035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.311 ms 00:20:37.553 [2024-07-11 18:27:23.894045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.553 [2024-07-11 18:27:23.894161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.553 [2024-07-11 18:27:23.894201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:37.553 [2024-07-11 18:27:23.894225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:20:37.553 [2024-07-11 18:27:23.894237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.553 [2024-07-11 18:27:23.894335] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:37.553 [2024-07-11 18:27:23.894351] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:37.553 [2024-07-11 18:27:23.894363] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:37.553 [2024-07-11 18:27:23.894386] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:37.553 [2024-07-11 18:27:23.894404] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:37.553 [2024-07-11 18:27:23.894447] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:37.553 [2024-07-11 18:27:23.894458] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:37.553 [2024-07-11 18:27:23.894468] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:37.553 [2024-07-11 18:27:23.894480] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:37.553 [2024-07-11 18:27:23.894491] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:37.553 [2024-07-11 18:27:23.894515] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:37.553 [2024-07-11 18:27:23.894525] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:37.553 [2024-07-11 18:27:23.894551] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:37.553 [2024-07-11 18:27:23.894561] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:37.553 [2024-07-11 18:27:23.894576] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:37.553 [2024-07-11 18:27:23.894588] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:37.553 [2024-07-11 18:27:23.894598] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:37.553 [2024-07-11 18:27:23.894608] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:37.553 [2024-07-11 18:27:23.894618] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:37.553 [2024-07-11 18:27:23.894630] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:37.553 [2024-07-11 18:27:23.894641] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:37.553 [2024-07-11 18:27:23.894651] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:37.553 [2024-07-11 18:27:23.894661] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:37.553 [2024-07-11 18:27:23.894671] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:37.553 [2024-07-11 18:27:23.894681] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:37.553 [2024-07-11 18:27:23.894691] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:37.553 [2024-07-11 18:27:23.894702] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:37.553 [2024-07-11 18:27:23.894711] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:37.553 [2024-07-11 18:27:23.894721] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:37.553 [2024-07-11 18:27:23.894732] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:37.553 [2024-07-11 18:27:23.894747] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:37.553 [2024-07-11 18:27:23.894758] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:37.553 [2024-07-11 18:27:23.894769] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:37.553 [2024-07-11 18:27:23.894779] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:37.553 [2024-07-11 18:27:23.894789] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:37.553 [2024-07-11 18:27:23.894799] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:37.553 [2024-07-11 18:27:23.894809] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:37.553 [2024-07-11 18:27:23.894819] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:37.553 [2024-07-11 18:27:23.894829] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:37.553 [2024-07-11 18:27:23.894839] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:37.553 [2024-07-11 18:27:23.894849] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:37.553 [2024-07-11 18:27:23.894860] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:37.553 [2024-07-11 18:27:23.894870] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:37.553 [2024-07-11 18:27:23.894879] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:37.553 [2024-07-11 18:27:23.894891] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:37.553 [2024-07-11 18:27:23.894901] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:37.553 [2024-07-11 18:27:23.894948] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:37.553 [2024-07-11 18:27:23.894971] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:37.553 [2024-07-11 18:27:23.894991] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:37.553 [2024-07-11 18:27:23.895009] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:37.553 
[2024-07-11 18:27:23.895028] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:37.553 [2024-07-11 18:27:23.895045] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:37.553 [2024-07-11 18:27:23.895068] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:37.553 [2024-07-11 18:27:23.895081] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:37.553 [2024-07-11 18:27:23.895127] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:37.553 [2024-07-11 18:27:23.895155] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:37.553 [2024-07-11 18:27:23.895167] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:37.553 [2024-07-11 18:27:23.895179] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:37.553 [2024-07-11 18:27:23.895190] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:37.553 [2024-07-11 18:27:23.895202] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:37.553 [2024-07-11 18:27:23.895213] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:37.553 [2024-07-11 18:27:23.895225] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:37.553 [2024-07-11 18:27:23.895240] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:37.553 [2024-07-11 18:27:23.895267] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:37.553 [2024-07-11 18:27:23.895279] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:37.553 [2024-07-11 18:27:23.895290] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:37.553 [2024-07-11 18:27:23.895315] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:37.553 [2024-07-11 18:27:23.895326] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:37.553 [2024-07-11 18:27:23.895337] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:37.553 [2024-07-11 18:27:23.895359] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:37.553 [2024-07-11 18:27:23.895371] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:37.553 [2024-07-11 18:27:23.895393] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:20:37.553 [2024-07-11 18:27:23.895405] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:37.553 [2024-07-11 18:27:23.895417] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:37.553 [2024-07-11 18:27:23.895428] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:37.553 [2024-07-11 18:27:23.895441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.554 [2024-07-11 18:27:23.895452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:37.554 [2024-07-11 18:27:23.895468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.165 ms 00:20:37.554 [2024-07-11 18:27:23.895483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.554 [2024-07-11 18:27:23.912847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.554 [2024-07-11 18:27:23.912903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:37.554 [2024-07-11 18:27:23.912938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.299 ms 00:20:37.554 [2024-07-11 18:27:23.912961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.554 [2024-07-11 18:27:23.913068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.554 [2024-07-11 18:27:23.913087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:37.554 [2024-07-11 18:27:23.913121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:20:37.554 [2024-07-11 18:27:23.913137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.554 [2024-07-11 18:27:23.920734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.554 [2024-07-11 18:27:23.920787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:37.554 [2024-07-11 18:27:23.920819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.496 ms 00:20:37.554 [2024-07-11 18:27:23.920830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.554 [2024-07-11 18:27:23.920880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.554 [2024-07-11 18:27:23.920894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:37.554 [2024-07-11 18:27:23.920912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:37.554 [2024-07-11 18:27:23.920922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.554 [2024-07-11 18:27:23.921267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.554 [2024-07-11 18:27:23.921286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:37.554 [2024-07-11 18:27:23.921308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.299 ms 00:20:37.554 [2024-07-11 18:27:23.921319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.554 [2024-07-11 18:27:23.921449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.554 [2024-07-11 18:27:23.921465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:37.554 [2024-07-11 18:27:23.921476] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:20:37.554 [2024-07-11 18:27:23.921489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.554 [2024-07-11 18:27:23.926064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.554 [2024-07-11 18:27:23.926111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:37.554 [2024-07-11 18:27:23.926142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.549 ms 00:20:37.554 [2024-07-11 18:27:23.926152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.554 [2024-07-11 18:27:23.928532] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:37.554 [2024-07-11 18:27:23.928589] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:37.554 [2024-07-11 18:27:23.928623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.554 [2024-07-11 18:27:23.928634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:37.554 [2024-07-11 18:27:23.928644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.346 ms 00:20:37.554 [2024-07-11 18:27:23.928658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.554 [2024-07-11 18:27:23.944255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.554 [2024-07-11 18:27:23.944296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:37.554 [2024-07-11 18:27:23.944329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.553 ms 00:20:37.554 [2024-07-11 18:27:23.944353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.554 [2024-07-11 18:27:23.946335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.554 [2024-07-11 18:27:23.946374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:37.554 [2024-07-11 18:27:23.946405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.929 ms 00:20:37.554 [2024-07-11 18:27:23.946416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.554 [2024-07-11 18:27:23.948168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.554 [2024-07-11 18:27:23.948240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:37.554 [2024-07-11 18:27:23.948272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.696 ms 00:20:37.554 [2024-07-11 18:27:23.948283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.554 [2024-07-11 18:27:23.948707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.554 [2024-07-11 18:27:23.948734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:37.554 [2024-07-11 18:27:23.948748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.345 ms 00:20:37.554 [2024-07-11 18:27:23.948763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.813 [2024-07-11 18:27:23.966370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.813 [2024-07-11 18:27:23.966437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:37.813 [2024-07-11 18:27:23.966488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
17.571 ms 00:20:37.813 [2024-07-11 18:27:23.966500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.813 [2024-07-11 18:27:23.974871] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:37.813 [2024-07-11 18:27:23.977469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.813 [2024-07-11 18:27:23.977521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:37.813 [2024-07-11 18:27:23.977564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.884 ms 00:20:37.813 [2024-07-11 18:27:23.977583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.813 [2024-07-11 18:27:23.977660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.813 [2024-07-11 18:27:23.977678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:37.813 [2024-07-11 18:27:23.977692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:37.813 [2024-07-11 18:27:23.977702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.813 [2024-07-11 18:27:23.977821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.813 [2024-07-11 18:27:23.977839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:37.813 [2024-07-11 18:27:23.977851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:20:37.813 [2024-07-11 18:27:23.977861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.813 [2024-07-11 18:27:23.977890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.813 [2024-07-11 18:27:23.977904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:37.813 [2024-07-11 18:27:23.977916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:37.813 [2024-07-11 18:27:23.977926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.813 [2024-07-11 18:27:23.977964] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:37.813 [2024-07-11 18:27:23.977991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.813 [2024-07-11 18:27:23.978006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:37.813 [2024-07-11 18:27:23.978024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:20:37.813 [2024-07-11 18:27:23.978035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.813 [2024-07-11 18:27:23.981721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.813 [2024-07-11 18:27:23.981777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:37.813 [2024-07-11 18:27:23.981821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.663 ms 00:20:37.813 [2024-07-11 18:27:23.981833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.813 [2024-07-11 18:27:23.981912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.813 [2024-07-11 18:27:23.981929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:37.813 [2024-07-11 18:27:23.981947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:20:37.813 [2024-07-11 18:27:23.981958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.813 
[2024-07-11 18:27:23.983147] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 99.729 ms, result 0 00:21:20.904  Copying: 1024/1024 [MB] (average 23 MBps)[2024-07-11 18:28:07.285135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.904 [2024-07-11 18:28:07.285388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:20.904 [2024-07-11 18:28:07.285540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:21:20.904 [2024-07-11 18:28:07.285592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.904 [2024-07-11 18:28:07.286727] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:20.904 [2024-07-11 18:28:07.289715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.904 [2024-07-11 18:28:07.289943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:20.904 [2024-07-11 18:28:07.289970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.785 ms 00:21:20.904 [2024-07-11 18:28:07.289982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.904 [2024-07-11 18:28:07.303540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.904 [2024-07-11 18:28:07.303592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:20.904 [2024-07-11 18:28:07.303626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.090 ms 00:21:20.904 [2024-07-11 18:28:07.303638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.164 [2024-07-11 18:28:07.325910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.164 [2024-07-11 18:28:07.325951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:21.164 [2024-07-11 18:28:07.325995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.251 ms 00:21:21.164 [2024-07-11 18:28:07.326017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:21:21.164 [2024-07-11 18:28:07.332197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.164 [2024-07-11 18:28:07.332226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:21.165 [2024-07-11 18:28:07.332257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.141 ms 00:21:21.165 [2024-07-11 18:28:07.332267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.165 [2024-07-11 18:28:07.333508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.165 [2024-07-11 18:28:07.333562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:21.165 [2024-07-11 18:28:07.333577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.176 ms 00:21:21.165 [2024-07-11 18:28:07.333587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.165 [2024-07-11 18:28:07.336707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.165 [2024-07-11 18:28:07.336744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:21.165 [2024-07-11 18:28:07.336796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.085 ms 00:21:21.165 [2024-07-11 18:28:07.336807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.165 [2024-07-11 18:28:07.444296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.165 [2024-07-11 18:28:07.444344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:21.165 [2024-07-11 18:28:07.444362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 107.451 ms 00:21:21.165 [2024-07-11 18:28:07.444374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.165 [2024-07-11 18:28:07.446136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.165 [2024-07-11 18:28:07.446217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:21:21.165 [2024-07-11 18:28:07.446234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.740 ms 00:21:21.165 [2024-07-11 18:28:07.446245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.165 [2024-07-11 18:28:07.447642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.165 [2024-07-11 18:28:07.447706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:21:21.165 [2024-07-11 18:28:07.447734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.360 ms 00:21:21.165 [2024-07-11 18:28:07.447744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.165 [2024-07-11 18:28:07.449004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.165 [2024-07-11 18:28:07.449073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:21.165 [2024-07-11 18:28:07.449137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.227 ms 00:21:21.165 [2024-07-11 18:28:07.449148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.165 [2024-07-11 18:28:07.450345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.165 [2024-07-11 18:28:07.450382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:21.165 [2024-07-11 18:28:07.450396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.133 ms 00:21:21.165 
[2024-07-11 18:28:07.450407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.165 [2024-07-11 18:28:07.450459] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:21.165 [2024-07-11 18:28:07.450522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 118784 / 261120 wr_cnt: 1 state: open 00:21:21.165 [2024-07-11 18:28:07.450544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:21.165 [2024-07-11 18:28:07.450556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:21.165 [2024-07-11 18:28:07.450567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:21.165 [2024-07-11 18:28:07.450578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:21.165 [2024-07-11 18:28:07.450590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:21.165 [2024-07-11 18:28:07.450600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:21.165 [2024-07-11 18:28:07.450612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:21.165 [2024-07-11 18:28:07.450623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:21.165 [2024-07-11 18:28:07.450634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:21.165 [2024-07-11 18:28:07.450645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:21.165 [2024-07-11 18:28:07.450656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:21.165 [2024-07-11 18:28:07.450667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:21.165 [2024-07-11 18:28:07.450694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:21.165 [2024-07-11 18:28:07.450706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:21.165 [2024-07-11 18:28:07.450717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:21.165 [2024-07-11 18:28:07.450728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:21.165 [2024-07-11 18:28:07.450739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:21.165 [2024-07-11 18:28:07.450751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:21.165 [2024-07-11 18:28:07.450762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:21.165 [2024-07-11 18:28:07.450774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:21.165 [2024-07-11 18:28:07.450785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:21.165 [2024-07-11 18:28:07.450796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:21.165 [2024-07-11 18:28:07.450807] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:21.165 [2024-07-11 18:28:07.450819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:21.165 [2024-07-11 18:28:07.450830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:21.165 [2024-07-11 18:28:07.450841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:21.165 [2024-07-11 18:28:07.450853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:21.165 [2024-07-11 18:28:07.450864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:21.165 [2024-07-11 18:28:07.450875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:21.165 [2024-07-11 18:28:07.450887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:21.165 [2024-07-11 18:28:07.450898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:21.165 [2024-07-11 18:28:07.450912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:21.165 [2024-07-11 18:28:07.450923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:21.165 [2024-07-11 18:28:07.450934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:21.165 [2024-07-11 18:28:07.450946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:21.165 [2024-07-11 18:28:07.450957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:21.165 [2024-07-11 18:28:07.450968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:21.165 [2024-07-11 18:28:07.450980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:21.165 [2024-07-11 18:28:07.450991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:21.165 [2024-07-11 18:28:07.451030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:21.165 [2024-07-11 18:28:07.451044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:21.165 [2024-07-11 18:28:07.451056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:21.165 [2024-07-11 18:28:07.451069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:21.165 [2024-07-11 18:28:07.451080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:21.165 [2024-07-11 18:28:07.451092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:21.165 [2024-07-11 18:28:07.451119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:21.165 [2024-07-11 18:28:07.451132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:21.165 [2024-07-11 
18:28:07.451144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:21.165 [2024-07-11 18:28:07.451156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:21.165 [2024-07-11 18:28:07.451168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:21.165 [2024-07-11 18:28:07.451180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:21.165 [2024-07-11 18:28:07.451192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:21.165 [2024-07-11 18:28:07.451217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:21.165 [2024-07-11 18:28:07.451229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:21.165 [2024-07-11 18:28:07.451241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:21.165 [2024-07-11 18:28:07.451253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:21.165 [2024-07-11 18:28:07.451265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:21.165 [2024-07-11 18:28:07.451277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:21.165 [2024-07-11 18:28:07.451289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:21.165 [2024-07-11 18:28:07.451301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:21.165 [2024-07-11 18:28:07.451313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:21.165 [2024-07-11 18:28:07.451325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:21.165 [2024-07-11 18:28:07.451337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:21.165 [2024-07-11 18:28:07.451349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:21.165 [2024-07-11 18:28:07.451361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:21.165 [2024-07-11 18:28:07.451373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:21.165 [2024-07-11 18:28:07.451385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:21.165 [2024-07-11 18:28:07.451397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:21.165 [2024-07-11 18:28:07.451409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:21.166 [2024-07-11 18:28:07.451436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:21.166 [2024-07-11 18:28:07.451448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:21.166 [2024-07-11 18:28:07.451480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 
00:21:21.166 [2024-07-11 18:28:07.451491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:21.166 [2024-07-11 18:28:07.451502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:21.166 [2024-07-11 18:28:07.451513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:21.166 [2024-07-11 18:28:07.451524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:21.166 [2024-07-11 18:28:07.451536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:21.166 [2024-07-11 18:28:07.451547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:21.166 [2024-07-11 18:28:07.451558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:21.166 [2024-07-11 18:28:07.451569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:21.166 [2024-07-11 18:28:07.451580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:21.166 [2024-07-11 18:28:07.451592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:21.166 [2024-07-11 18:28:07.451603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:21.166 [2024-07-11 18:28:07.451614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:21.166 [2024-07-11 18:28:07.451625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:21.166 [2024-07-11 18:28:07.451636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:21.166 [2024-07-11 18:28:07.451648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:21.166 [2024-07-11 18:28:07.451659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:21.166 [2024-07-11 18:28:07.451670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:21.166 [2024-07-11 18:28:07.451681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:21.166 [2024-07-11 18:28:07.451692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:21.166 [2024-07-11 18:28:07.451703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:21.166 [2024-07-11 18:28:07.451714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:21.166 [2024-07-11 18:28:07.451725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:21.166 [2024-07-11 18:28:07.451736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:21.166 [2024-07-11 18:28:07.451748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:21.166 [2024-07-11 18:28:07.451759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 
wr_cnt: 0 state: free 00:21:21.166 [2024-07-11 18:28:07.451770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:21.166 [2024-07-11 18:28:07.451781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:21.166 [2024-07-11 18:28:07.451815] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:21.166 [2024-07-11 18:28:07.451844] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2261f664-bc0a-4247-828d-02457d49b5ee 00:21:21.166 [2024-07-11 18:28:07.451865] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 118784 00:21:21.166 [2024-07-11 18:28:07.451876] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 119744 00:21:21.166 [2024-07-11 18:28:07.451887] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 118784 00:21:21.166 [2024-07-11 18:28:07.451898] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0081 00:21:21.166 [2024-07-11 18:28:07.451915] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:21.166 [2024-07-11 18:28:07.451926] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:21.166 [2024-07-11 18:28:07.451937] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:21.166 [2024-07-11 18:28:07.451947] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:21.166 [2024-07-11 18:28:07.451956] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:21.166 [2024-07-11 18:28:07.451967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.166 [2024-07-11 18:28:07.451978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:21.166 [2024-07-11 18:28:07.451989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.525 ms 00:21:21.166 [2024-07-11 18:28:07.452008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.166 [2024-07-11 18:28:07.453526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.166 [2024-07-11 18:28:07.453552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:21.166 [2024-07-11 18:28:07.453564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.495 ms 00:21:21.166 [2024-07-11 18:28:07.453573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.166 [2024-07-11 18:28:07.453687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.166 [2024-07-11 18:28:07.453705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:21.166 [2024-07-11 18:28:07.453720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:21:21.166 [2024-07-11 18:28:07.453729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.166 [2024-07-11 18:28:07.458356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:21.166 [2024-07-11 18:28:07.458537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:21.166 [2024-07-11 18:28:07.458642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:21.166 [2024-07-11 18:28:07.458688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.166 [2024-07-11 18:28:07.458865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:21.166 [2024-07-11 18:28:07.458913] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:21.166 [2024-07-11 18:28:07.458962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:21.166 [2024-07-11 18:28:07.459044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.166 [2024-07-11 18:28:07.459174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:21.166 [2024-07-11 18:28:07.459232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:21.166 [2024-07-11 18:28:07.459274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:21.166 [2024-07-11 18:28:07.459312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.166 [2024-07-11 18:28:07.459363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:21.166 [2024-07-11 18:28:07.459456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:21.166 [2024-07-11 18:28:07.459493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:21.166 [2024-07-11 18:28:07.459536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.166 [2024-07-11 18:28:07.467566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:21.166 [2024-07-11 18:28:07.467788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:21.166 [2024-07-11 18:28:07.467926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:21.166 [2024-07-11 18:28:07.467975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.166 [2024-07-11 18:28:07.474741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:21.166 [2024-07-11 18:28:07.474940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:21.166 [2024-07-11 18:28:07.475069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:21.166 [2024-07-11 18:28:07.475246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.166 [2024-07-11 18:28:07.475320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:21.166 [2024-07-11 18:28:07.475431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:21.166 [2024-07-11 18:28:07.475536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:21.166 [2024-07-11 18:28:07.475556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.166 [2024-07-11 18:28:07.475621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:21.166 [2024-07-11 18:28:07.475637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:21.166 [2024-07-11 18:28:07.475648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:21.166 [2024-07-11 18:28:07.475658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.166 [2024-07-11 18:28:07.475887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:21.166 [2024-07-11 18:28:07.475905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:21.166 [2024-07-11 18:28:07.475916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:21.166 [2024-07-11 18:28:07.475926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.166 [2024-07-11 18:28:07.475992] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:21.166 [2024-07-11 18:28:07.476009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:21.166 [2024-07-11 18:28:07.476021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:21.166 [2024-07-11 18:28:07.476031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.166 [2024-07-11 18:28:07.476159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:21.166 [2024-07-11 18:28:07.476181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:21.166 [2024-07-11 18:28:07.476194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:21.166 [2024-07-11 18:28:07.476206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.166 [2024-07-11 18:28:07.476281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:21.166 [2024-07-11 18:28:07.476312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:21.166 [2024-07-11 18:28:07.476325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:21.166 [2024-07-11 18:28:07.476336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.166 [2024-07-11 18:28:07.476534] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 193.691 ms, result 0 00:21:22.101 00:21:22.101 00:21:22.101 18:28:08 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:21:22.101 [2024-07-11 18:28:08.278206] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
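For scale, the spdk_dd arguments above work out as follows, if --skip and --count are in units of the bdev's 4 KiB blocks (an inference from the totals below, not something stated in the log): --skip=131072 is 131072 × 4 KiB = 512 MiB into ftl0, and --count=262144 is 262144 × 4 KiB = 1024 MiB, which matches the "Copying: …/1024 [MB]" progress reported further down. The preceding shutdown's statistics dump is self-consistent in the same way: WAF = total writes / user writes = 119744 / 118784 ≈ 1.0081.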
00:21:22.101 [2024-07-11 18:28:08.278386] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92790 ] 00:21:22.101 [2024-07-11 18:28:08.426873] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:22.101 [2024-07-11 18:28:08.461203] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:22.362 [2024-07-11 18:28:08.543783] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:22.362 [2024-07-11 18:28:08.543889] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:22.362 [2024-07-11 18:28:08.700551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.362 [2024-07-11 18:28:08.700618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:22.362 [2024-07-11 18:28:08.700638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:22.362 [2024-07-11 18:28:08.700648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.362 [2024-07-11 18:28:08.700710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.362 [2024-07-11 18:28:08.700730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:22.362 [2024-07-11 18:28:08.700744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:21:22.362 [2024-07-11 18:28:08.700753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.362 [2024-07-11 18:28:08.700780] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:22.362 [2024-07-11 18:28:08.701030] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:22.362 [2024-07-11 18:28:08.701055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.362 [2024-07-11 18:28:08.701065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:22.362 [2024-07-11 18:28:08.701076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.281 ms 00:21:22.362 [2024-07-11 18:28:08.701086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.362 [2024-07-11 18:28:08.702322] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:22.362 [2024-07-11 18:28:08.704412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.362 [2024-07-11 18:28:08.704450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:22.362 [2024-07-11 18:28:08.704494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.093 ms 00:21:22.362 [2024-07-11 18:28:08.704504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.362 [2024-07-11 18:28:08.704571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.362 [2024-07-11 18:28:08.704595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:22.362 [2024-07-11 18:28:08.704606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:21:22.362 [2024-07-11 18:28:08.704616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.362 [2024-07-11 18:28:08.708894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:21:22.362 [2024-07-11 18:28:08.708929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:22.362 [2024-07-11 18:28:08.708968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.215 ms 00:21:22.362 [2024-07-11 18:28:08.708985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.362 [2024-07-11 18:28:08.709078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.362 [2024-07-11 18:28:08.709107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:22.362 [2024-07-11 18:28:08.709144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:21:22.362 [2024-07-11 18:28:08.709155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.362 [2024-07-11 18:28:08.709235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.362 [2024-07-11 18:28:08.709252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:22.362 [2024-07-11 18:28:08.709275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:21:22.362 [2024-07-11 18:28:08.709292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.362 [2024-07-11 18:28:08.709328] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:22.362 [2024-07-11 18:28:08.710619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.362 [2024-07-11 18:28:08.710652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:22.362 [2024-07-11 18:28:08.710681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.303 ms 00:21:22.362 [2024-07-11 18:28:08.710706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.362 [2024-07-11 18:28:08.710757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.362 [2024-07-11 18:28:08.710773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:22.362 [2024-07-11 18:28:08.710784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:21:22.362 [2024-07-11 18:28:08.710793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.362 [2024-07-11 18:28:08.710818] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:22.362 [2024-07-11 18:28:08.710841] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:22.362 [2024-07-11 18:28:08.710889] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:22.362 [2024-07-11 18:28:08.710910] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:21:22.362 [2024-07-11 18:28:08.711027] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:22.362 [2024-07-11 18:28:08.711059] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:22.362 [2024-07-11 18:28:08.711088] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:21:22.362 [2024-07-11 18:28:08.711103] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:22.362 [2024-07-11 18:28:08.711152] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:22.362 [2024-07-11 18:28:08.711164] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:22.363 [2024-07-11 18:28:08.711174] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:22.363 [2024-07-11 18:28:08.711184] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:22.363 [2024-07-11 18:28:08.711195] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:22.363 [2024-07-11 18:28:08.711229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.363 [2024-07-11 18:28:08.711256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:22.363 [2024-07-11 18:28:08.711272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.414 ms 00:21:22.363 [2024-07-11 18:28:08.711282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.363 [2024-07-11 18:28:08.711378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.363 [2024-07-11 18:28:08.711391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:22.363 [2024-07-11 18:28:08.711402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:21:22.363 [2024-07-11 18:28:08.711432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.363 [2024-07-11 18:28:08.711598] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:22.363 [2024-07-11 18:28:08.711631] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:22.363 [2024-07-11 18:28:08.711643] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:22.363 [2024-07-11 18:28:08.711660] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:22.363 [2024-07-11 18:28:08.711671] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:22.363 [2024-07-11 18:28:08.711681] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:22.363 [2024-07-11 18:28:08.711691] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:22.363 [2024-07-11 18:28:08.711702] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:22.363 [2024-07-11 18:28:08.711712] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:22.363 [2024-07-11 18:28:08.711722] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:22.363 [2024-07-11 18:28:08.711732] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:22.363 [2024-07-11 18:28:08.711743] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:22.363 [2024-07-11 18:28:08.711752] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:22.363 [2024-07-11 18:28:08.711763] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:22.363 [2024-07-11 18:28:08.711773] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:22.363 [2024-07-11 18:28:08.711783] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:22.363 [2024-07-11 18:28:08.711796] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:22.363 [2024-07-11 18:28:08.711822] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:22.363 [2024-07-11 18:28:08.711832] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:22.363 [2024-07-11 18:28:08.711842] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:22.363 [2024-07-11 18:28:08.711851] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:22.363 [2024-07-11 18:28:08.711861] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:22.363 [2024-07-11 18:28:08.711871] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:22.363 [2024-07-11 18:28:08.711881] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:22.363 [2024-07-11 18:28:08.711890] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:22.363 [2024-07-11 18:28:08.711900] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:22.363 [2024-07-11 18:28:08.711909] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:22.363 [2024-07-11 18:28:08.711919] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:22.363 [2024-07-11 18:28:08.711929] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:22.363 [2024-07-11 18:28:08.711939] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:22.363 [2024-07-11 18:28:08.711948] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:22.363 [2024-07-11 18:28:08.711958] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:22.363 [2024-07-11 18:28:08.711970] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:22.363 [2024-07-11 18:28:08.711980] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:22.363 [2024-07-11 18:28:08.711990] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:22.363 [2024-07-11 18:28:08.711999] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:22.363 [2024-07-11 18:28:08.712009] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:22.363 [2024-07-11 18:28:08.712019] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:22.363 [2024-07-11 18:28:08.712029] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:22.363 [2024-07-11 18:28:08.712039] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:22.363 [2024-07-11 18:28:08.712048] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:22.363 [2024-07-11 18:28:08.712058] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:22.363 [2024-07-11 18:28:08.712067] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:22.363 [2024-07-11 18:28:08.712077] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:22.363 [2024-07-11 18:28:08.712088] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:22.363 [2024-07-11 18:28:08.712098] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:22.363 [2024-07-11 18:28:08.712108] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:22.363 [2024-07-11 18:28:08.712118] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:22.363 [2024-07-11 18:28:08.712131] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:22.363 [2024-07-11 18:28:08.712141] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:22.363 
[2024-07-11 18:28:08.712152] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:22.363 [2024-07-11 18:28:08.712162] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:22.363 [2024-07-11 18:28:08.712172] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:22.363 [2024-07-11 18:28:08.712183] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:22.363 [2024-07-11 18:28:08.712196] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:22.363 [2024-07-11 18:28:08.712223] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:22.363 [2024-07-11 18:28:08.712235] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:22.363 [2024-07-11 18:28:08.712247] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:22.363 [2024-07-11 18:28:08.712257] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:22.363 [2024-07-11 18:28:08.712269] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:22.363 [2024-07-11 18:28:08.712280] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:22.363 [2024-07-11 18:28:08.712291] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:22.363 [2024-07-11 18:28:08.712301] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:22.363 [2024-07-11 18:28:08.712312] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:22.363 [2024-07-11 18:28:08.712326] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:22.363 [2024-07-11 18:28:08.712338] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:22.363 [2024-07-11 18:28:08.712348] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:22.363 [2024-07-11 18:28:08.712359] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:22.363 [2024-07-11 18:28:08.712370] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:22.363 [2024-07-11 18:28:08.712382] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:22.363 [2024-07-11 18:28:08.712395] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:22.363 [2024-07-11 18:28:08.712419] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:21:22.363 [2024-07-11 18:28:08.712430] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:22.363 [2024-07-11 18:28:08.712441] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:22.363 [2024-07-11 18:28:08.712453] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:22.363 [2024-07-11 18:28:08.712465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.363 [2024-07-11 18:28:08.712475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:22.363 [2024-07-11 18:28:08.712497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.980 ms 00:21:22.363 [2024-07-11 18:28:08.712508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.363 [2024-07-11 18:28:08.732227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.363 [2024-07-11 18:28:08.732293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:22.363 [2024-07-11 18:28:08.732318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.661 ms 00:21:22.363 [2024-07-11 18:28:08.732334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.363 [2024-07-11 18:28:08.732488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.363 [2024-07-11 18:28:08.732523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:22.363 [2024-07-11 18:28:08.732550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:21:22.363 [2024-07-11 18:28:08.732573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.363 [2024-07-11 18:28:08.742049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.363 [2024-07-11 18:28:08.742121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:22.363 [2024-07-11 18:28:08.742144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.368 ms 00:21:22.363 [2024-07-11 18:28:08.742158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.363 [2024-07-11 18:28:08.742222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.363 [2024-07-11 18:28:08.742243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:22.363 [2024-07-11 18:28:08.742267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:22.363 [2024-07-11 18:28:08.742281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.363 [2024-07-11 18:28:08.742701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.363 [2024-07-11 18:28:08.742751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:22.364 [2024-07-11 18:28:08.742770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.365 ms 00:21:22.364 [2024-07-11 18:28:08.742784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.364 [2024-07-11 18:28:08.743013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.364 [2024-07-11 18:28:08.743056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:22.364 [2024-07-11 18:28:08.743106] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.162 ms 00:21:22.364 [2024-07-11 18:28:08.743126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.364 [2024-07-11 18:28:08.747748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.364 [2024-07-11 18:28:08.747784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:22.364 [2024-07-11 18:28:08.747815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.559 ms 00:21:22.364 [2024-07-11 18:28:08.747840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.364 [2024-07-11 18:28:08.750200] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:21:22.364 [2024-07-11 18:28:08.750241] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:22.364 [2024-07-11 18:28:08.750272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.364 [2024-07-11 18:28:08.750283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:22.364 [2024-07-11 18:28:08.750294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.327 ms 00:21:22.364 [2024-07-11 18:28:08.750303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.364 [2024-07-11 18:28:08.764364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.364 [2024-07-11 18:28:08.764401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:22.364 [2024-07-11 18:28:08.764433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.004 ms 00:21:22.364 [2024-07-11 18:28:08.764443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.364 [2024-07-11 18:28:08.766277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.364 [2024-07-11 18:28:08.766318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:22.364 [2024-07-11 18:28:08.766348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.793 ms 00:21:22.364 [2024-07-11 18:28:08.766358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.364 [2024-07-11 18:28:08.768018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.364 [2024-07-11 18:28:08.768054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:22.364 [2024-07-11 18:28:08.768083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.623 ms 00:21:22.364 [2024-07-11 18:28:08.768107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.364 [2024-07-11 18:28:08.768508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.364 [2024-07-11 18:28:08.768534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:22.364 [2024-07-11 18:28:08.768548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.313 ms 00:21:22.364 [2024-07-11 18:28:08.768563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.623 [2024-07-11 18:28:08.784526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.623 [2024-07-11 18:28:08.784613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:22.623 [2024-07-11 18:28:08.784633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
15.940 ms 00:21:22.623 [2024-07-11 18:28:08.784644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.624 [2024-07-11 18:28:08.792206] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:22.624 [2024-07-11 18:28:08.794399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.624 [2024-07-11 18:28:08.794434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:22.624 [2024-07-11 18:28:08.794479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.698 ms 00:21:22.624 [2024-07-11 18:28:08.794489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.624 [2024-07-11 18:28:08.794553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.624 [2024-07-11 18:28:08.794570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:22.624 [2024-07-11 18:28:08.794582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:22.624 [2024-07-11 18:28:08.794592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.624 [2024-07-11 18:28:08.796342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.624 [2024-07-11 18:28:08.796383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:22.624 [2024-07-11 18:28:08.796397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.689 ms 00:21:22.624 [2024-07-11 18:28:08.796406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.624 [2024-07-11 18:28:08.796456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.624 [2024-07-11 18:28:08.796472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:22.624 [2024-07-11 18:28:08.796483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:21:22.624 [2024-07-11 18:28:08.796492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.624 [2024-07-11 18:28:08.796543] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:22.624 [2024-07-11 18:28:08.796565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.624 [2024-07-11 18:28:08.796589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:22.624 [2024-07-11 18:28:08.796600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:21:22.624 [2024-07-11 18:28:08.796609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.624 [2024-07-11 18:28:08.800057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.624 [2024-07-11 18:28:08.800119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:22.624 [2024-07-11 18:28:08.800135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.427 ms 00:21:22.624 [2024-07-11 18:28:08.800145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.624 [2024-07-11 18:28:08.800224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.624 [2024-07-11 18:28:08.800241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:22.624 [2024-07-11 18:28:08.800257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:21:22.624 [2024-07-11 18:28:08.800267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.624 
[2024-07-11 18:28:08.807566] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 105.528 ms, result 0 00:22:05.267  Copying: 1024/1024 [MB] (average 24 MBps)[2024-07-11 18:28:51.493689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.267 [2024-07-11 18:28:51.493775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:05.267 [2024-07-11 18:28:51.493814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:05.267 [2024-07-11 18:28:51.493830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.267 [2024-07-11 18:28:51.493870] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:05.267 [2024-07-11 18:28:51.495383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.267 [2024-07-11 18:28:51.495432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:05.267 [2024-07-11 18:28:51.495461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.466 ms 00:22:05.267 [2024-07-11 18:28:51.495494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.267 [2024-07-11 18:28:51.495818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.267 [2024-07-11 18:28:51.495850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:05.267 [2024-07-11 18:28:51.495879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.288 ms 00:22:05.267 [2024-07-11 18:28:51.495895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.267 [2024-07-11 18:28:51.502483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.267 [2024-07-11 18:28:51.502729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:05.267 [2024-07-11 18:28:51.502914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.557 ms 00:22:05.267 [2024-07-11 18:28:51.503001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.267
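
The copy phase above can be cross-checked against the wall-clock timestamps: 1024 MB at the reported average of 24 MBps should take about 42.7 s, which matches the gap between the startup finish (18:28:08.807) and the first shutdown step (18:28:51.493). A minimal sketch of the check, assuming only the figures the log itself reports:

  # 1024 MB / 24 MBps ~= 42.7 s, matching the ~42.7 s timestamp gap above
  awk 'BEGIN { printf "expected copy time: %.1f s\n", 1024 / 24 }'
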
[2024-07-11 18:28:51.510498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.267 [2024-07-11 18:28:51.510691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:05.267 [2024-07-11 18:28:51.510860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.240 ms 00:22:05.267 [2024-07-11 18:28:51.511032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.267 [2024-07-11 18:28:51.512458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.267 [2024-07-11 18:28:51.512660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:05.267 [2024-07-11 18:28:51.512827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.258 ms 00:22:05.267 [2024-07-11 18:28:51.512955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.267 [2024-07-11 18:28:51.516212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.267 [2024-07-11 18:28:51.516384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:05.267 [2024-07-11 18:28:51.516569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.170 ms 00:22:05.267 [2024-07-11 18:28:51.516624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.267 [2024-07-11 18:28:51.636358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.267 [2024-07-11 18:28:51.636612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:05.267 [2024-07-11 18:28:51.636765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 119.560 ms 00:22:05.267 [2024-07-11 18:28:51.636915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.267 [2024-07-11 18:28:51.638916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.267 [2024-07-11 18:28:51.639162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:05.267 [2024-07-11 18:28:51.639305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.926 ms 00:22:05.267 [2024-07-11 18:28:51.639330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.267 [2024-07-11 18:28:51.640879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.267 [2024-07-11 18:28:51.641052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:05.267 [2024-07-11 18:28:51.641194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.501 ms 00:22:05.268 [2024-07-11 18:28:51.641325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.268 [2024-07-11 18:28:51.642514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.268 [2024-07-11 18:28:51.642721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:05.268 [2024-07-11 18:28:51.642874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.100 ms 00:22:05.268 [2024-07-11 18:28:51.642927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.268 [2024-07-11 18:28:51.644151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.268 [2024-07-11 18:28:51.644233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:05.268 [2024-07-11 18:28:51.644263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.008 ms 00:22:05.268 [2024-07-11 18:28:51.644273]
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.268 [2024-07-11 18:28:51.644308] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:05.268 [2024-07-11 18:28:51.644329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 133888 / 261120 wr_cnt: 1 state: open 00:22:05.268 [2024-07-11 18:28:51.644343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:05.268 [2024-07-11 18:28:51.644355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:05.268 [2024-07-11 18:28:51.644366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:05.268 [2024-07-11 18:28:51.644377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:05.268 [2024-07-11 18:28:51.644388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:05.268 [2024-07-11 18:28:51.644399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:05.268 [2024-07-11 18:28:51.644410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:05.268 [2024-07-11 18:28:51.644421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:05.268 [2024-07-11 18:28:51.644432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:05.268 [2024-07-11 18:28:51.644443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:05.268 [2024-07-11 18:28:51.644454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:05.268 [2024-07-11 18:28:51.644465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:05.268 [2024-07-11 18:28:51.644476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:05.268 [2024-07-11 18:28:51.644487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:05.268 [2024-07-11 18:28:51.644498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:05.268 [2024-07-11 18:28:51.644509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:05.268 [2024-07-11 18:28:51.644520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:05.268 [2024-07-11 18:28:51.644531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:05.268 [2024-07-11 18:28:51.644542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:05.268 [2024-07-11 18:28:51.644553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:05.268 [2024-07-11 18:28:51.644564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:05.268 [2024-07-11 18:28:51.644575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:05.268 [2024-07-11 18:28:51.644586] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:05.268 [2024-07-11 18:28:51.644597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:05.268 [2024-07-11 18:28:51.644608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:05.268 [2024-07-11 18:28:51.644620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:05.268 [2024-07-11 18:28:51.644632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:05.268 [2024-07-11 18:28:51.644658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:05.268 [2024-07-11 18:28:51.644670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:05.268 [2024-07-11 18:28:51.644681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:05.268 [2024-07-11 18:28:51.644693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:05.268 [2024-07-11 18:28:51.644704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:05.268 [2024-07-11 18:28:51.644719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:05.268 [2024-07-11 18:28:51.644731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:05.268 [2024-07-11 18:28:51.644743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:05.268 [2024-07-11 18:28:51.644754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:05.268 [2024-07-11 18:28:51.644765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:05.268 [2024-07-11 18:28:51.644776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:05.268 [2024-07-11 18:28:51.644788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:05.268 [2024-07-11 18:28:51.644800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:05.268 [2024-07-11 18:28:51.644811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:05.268 [2024-07-11 18:28:51.644822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:05.268 [2024-07-11 18:28:51.644833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:05.268 [2024-07-11 18:28:51.644845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:05.268 [2024-07-11 18:28:51.644856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:05.268 [2024-07-11 18:28:51.644867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:05.268 [2024-07-11 18:28:51.644878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:05.268 [2024-07-11 18:28:51.644889] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:05.268 [2024-07-11 18:28:51.644900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:05.268 [2024-07-11 18:28:51.644912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:05.268 [2024-07-11 18:28:51.644923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:05.268 [2024-07-11 18:28:51.644934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:05.268 [2024-07-11 18:28:51.644945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:05.268 [2024-07-11 18:28:51.644972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:05.268 [2024-07-11 18:28:51.644984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:05.268 [2024-07-11 18:28:51.644996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:05.268 [2024-07-11 18:28:51.645007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:05.268 [2024-07-11 18:28:51.645018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:05.268 [2024-07-11 18:28:51.645029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:05.268 [2024-07-11 18:28:51.645041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:05.268 [2024-07-11 18:28:51.645052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:05.268 [2024-07-11 18:28:51.645078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:05.268 [2024-07-11 18:28:51.645089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:05.268 [2024-07-11 18:28:51.645100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:05.268 [2024-07-11 18:28:51.645134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:05.268 [2024-07-11 18:28:51.645147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:05.268 [2024-07-11 18:28:51.645158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:05.268 [2024-07-11 18:28:51.645170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:05.268 [2024-07-11 18:28:51.645181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:05.268 [2024-07-11 18:28:51.645192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:05.268 [2024-07-11 18:28:51.645203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:05.268 [2024-07-11 18:28:51.645214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:05.268 [2024-07-11 
18:28:51.645240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:05.268 [2024-07-11 18:28:51.645251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:05.268 [2024-07-11 18:28:51.645261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:05.268 [2024-07-11 18:28:51.645272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:05.268 [2024-07-11 18:28:51.645282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:05.268 [2024-07-11 18:28:51.645293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:05.268 [2024-07-11 18:28:51.645303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:05.268 [2024-07-11 18:28:51.645314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:05.268 [2024-07-11 18:28:51.645325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:05.268 [2024-07-11 18:28:51.645335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:05.268 [2024-07-11 18:28:51.645346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:05.268 [2024-07-11 18:28:51.645357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:05.268 [2024-07-11 18:28:51.645368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:05.268 [2024-07-11 18:28:51.645394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:05.268 [2024-07-11 18:28:51.645405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:05.269 [2024-07-11 18:28:51.645416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:05.269 [2024-07-11 18:28:51.645427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:05.269 [2024-07-11 18:28:51.645438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:05.269 [2024-07-11 18:28:51.645448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:05.269 [2024-07-11 18:28:51.645459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:05.269 [2024-07-11 18:28:51.645470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:05.269 [2024-07-11 18:28:51.645481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:05.269 [2024-07-11 18:28:51.645507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:05.269 [2024-07-11 18:28:51.645519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:05.269 [2024-07-11 18:28:51.645533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 
00:22:05.269 [2024-07-11 18:28:51.645545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:05.269 [2024-07-11 18:28:51.645556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:05.269 [2024-07-11 18:28:51.645576] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:05.269 [2024-07-11 18:28:51.645587] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2261f664-bc0a-4247-828d-02457d49b5ee 00:22:05.269 [2024-07-11 18:28:51.645605] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 133888 00:22:05.269 [2024-07-11 18:28:51.645616] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 16064 00:22:05.269 [2024-07-11 18:28:51.645626] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 15104 00:22:05.269 [2024-07-11 18:28:51.645637] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0636 00:22:05.269 [2024-07-11 18:28:51.645648] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:05.269 [2024-07-11 18:28:51.645659] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:05.269 [2024-07-11 18:28:51.645669] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:05.269 [2024-07-11 18:28:51.645679] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:05.269 [2024-07-11 18:28:51.645689] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:05.269
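
The WAF value just reported follows directly from the two write counters above it. A minimal check, assuming only the counters printed by ftl_dev_dump_stats (16064 total writes, 15104 user writes):

  # WAF = total writes / user writes = 16064 / 15104 ~= 1.0636
  awk 'BEGIN { printf "WAF: %.4f\n", 16064 / 15104 }'
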
[2024-07-11 18:28:51.645699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.269 [2024-07-11 18:28:51.645710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:05.269 [2024-07-11 18:28:51.645736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.393 ms 00:22:05.269 [2024-07-11 18:28:51.645746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.269 [2024-07-11 18:28:51.647003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.269 [2024-07-11 18:28:51.647024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:05.269 [2024-07-11 18:28:51.647037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.235 ms 00:22:05.269 [2024-07-11 18:28:51.647048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.269 [2024-07-11 18:28:51.647188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.269 [2024-07-11 18:28:51.647211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:05.269 [2024-07-11 18:28:51.647224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.117 ms 00:22:05.269 [2024-07-11 18:28:51.647239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.269 [2024-07-11 18:28:51.652052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:05.269 [2024-07-11 18:28:51.652269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:05.269 [2024-07-11 18:28:51.652395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:05.269 [2024-07-11 18:28:51.652542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.269 [2024-07-11 18:28:51.652655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:05.269 [2024-07-11 18:28:51.652785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:05.269 [2024-07-11 18:28:51.652918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:05.269 [2024-07-11 18:28:51.652984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.269 [2024-07-11 18:28:51.653199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:05.269 [2024-07-11 18:28:51.653259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:05.269 [2024-07-11 18:28:51.653299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:05.269 [2024-07-11 18:28:51.653420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.269 [2024-07-11 18:28:51.653472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:05.269 [2024-07-11 18:28:51.653506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:05.269 [2024-07-11 18:28:51.653518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:05.269 [2024-07-11 18:28:51.653541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.269 [2024-07-11 18:28:51.661561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:05.269 [2024-07-11 18:28:51.661618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:05.269 [2024-07-11 18:28:51.661651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:05.269 [2024-07-11 18:28:51.661662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.269 [2024-07-11 18:28:51.668301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:05.269 [2024-07-11 18:28:51.668350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:05.269 [2024-07-11 18:28:51.668382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:05.269 [2024-07-11 18:28:51.668401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.269 [2024-07-11 18:28:51.668447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:05.269 [2024-07-11 18:28:51.668461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:05.269 [2024-07-11 18:28:51.668472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:05.269 [2024-07-11 18:28:51.668482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.269 [2024-07-11 18:28:51.668556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:05.269 [2024-07-11 18:28:51.668571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:05.269 [2024-07-11 18:28:51.668582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:05.269 [2024-07-11 18:28:51.668592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.269 [2024-07-11 18:28:51.668682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:05.269 [2024-07-11 18:28:51.668701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:05.269 [2024-07-11 18:28:51.668712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:05.269 [2024-07-11 18:28:51.668722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.269 [2024-07-11 18:28:51.668764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0]
Rollback 00:22:05.269 [2024-07-11 18:28:51.668781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:05.269 [2024-07-11 18:28:51.668792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:05.269 [2024-07-11 18:28:51.668801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.269 [2024-07-11 18:28:51.668863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:05.269 [2024-07-11 18:28:51.668878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:05.269 [2024-07-11 18:28:51.668898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:05.269 [2024-07-11 18:28:51.668908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.269 [2024-07-11 18:28:51.668956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:05.269 [2024-07-11 18:28:51.668971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:05.269 [2024-07-11 18:28:51.668981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:05.269 [2024-07-11 18:28:51.668991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.269 [2024-07-11 18:28:51.669119] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 175.420 ms, result 0 00:22:05.528 00:22:05.528 00:22:05.528 18:28:51 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:08.058 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:22:08.058 18:28:53 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:22:08.058 18:28:53 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:22:08.058 18:28:53 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:22:08.058 18:28:54 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:08.058 18:28:54 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:08.058 18:28:54 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 91221 00:22:08.058 Process with pid 91221 is not found 00:22:08.058 18:28:54 ftl.ftl_restore -- common/autotest_common.sh@948 -- # '[' -z 91221 ']' 00:22:08.058 18:28:54 ftl.ftl_restore -- common/autotest_common.sh@952 -- # kill -0 91221 00:22:08.058 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (91221) - No such process 00:22:08.058 18:28:54 ftl.ftl_restore -- common/autotest_common.sh@975 -- # echo 'Process with pid 91221 is not found' 00:22:08.058 18:28:54 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:22:08.058 Remove shared memory files 00:22:08.058 18:28:54 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:22:08.058 18:28:54 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:22:08.058 18:28:54 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:22:08.058 18:28:54 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:22:08.058 18:28:54 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:22:08.058 18:28:54 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:22:08.058 ************************************ 00:22:08.058 END TEST ftl_restore 00:22:08.058 ************************************ 00:22:08.058 00:22:08.058 real 3m17.823s 00:22:08.058 user 3m4.869s 00:22:08.058 sys 
0m14.707s 00:22:08.058 18:28:54 ftl.ftl_restore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:08.058 18:28:54 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:22:08.058 18:28:54 ftl -- common/autotest_common.sh@1142 -- # return 0 00:22:08.058 18:28:54 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:22:08.058 18:28:54 ftl -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:22:08.058 18:28:54 ftl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:08.058 18:28:54 ftl -- common/autotest_common.sh@10 -- # set +x 00:22:08.058 ************************************ 00:22:08.058 START TEST ftl_dirty_shutdown 00:22:08.058 ************************************ 00:22:08.058 18:28:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:22:08.058 * Looking for test storage... 00:22:08.058 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:22:08.058 18:28:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:22:08.058 18:28:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:22:08.058 18:28:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:22:08.058 18:28:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:22:08.058 18:28:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:22:08.058 18:28:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:22:08.058 18:28:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:08.058 18:28:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:22:08.058 18:28:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:22:08.058 18:28:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:08.058 18:28:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:08.058 18:28:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:22:08.058 18:28:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:22:08.058 18:28:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:08.058 18:28:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:08.058 18:28:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:22:08.058 18:28:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:22:08.058 18:28:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:08.058 18:28:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:08.058 18:28:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:22:08.058 18:28:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:22:08.058 18:28:54 ftl.ftl_dirty_shutdown 
-- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:08.058 18:28:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:08.058 18:28:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:08.058 18:28:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:08.058 18:28:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:22:08.058 18:28:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:22:08.058 18:28:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:08.058 18:28:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:08.058 18:28:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:08.058 18:28:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:08.058 18:28:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:22:08.058 18:28:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:22:08.058 18:28:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:22:08.058 18:28:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:22:08.058 18:28:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:22:08.058 18:28:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:22:08.058 18:28:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:22:08.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:08.059 18:28:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:22:08.059 18:28:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:22:08.059 18:28:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:22:08.059 18:28:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:22:08.059 18:28:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=93308 00:22:08.059 18:28:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 93308 00:22:08.059 18:28:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@829 -- # '[' -z 93308 ']' 00:22:08.059 18:28:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:08.059 18:28:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:22:08.059 18:28:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:08.059 18:28:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:08.059 18:28:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:08.059 18:28:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:08.059 [2024-07-11 18:28:54.408127] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:22:08.059 [2024-07-11 18:28:54.408294] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93308 ] 00:22:08.318 [2024-07-11 18:28:54.561241] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:08.318 [2024-07-11 18:28:54.604325] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:08.912 18:28:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:08.912 18:28:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@862 -- # return 0 00:22:08.912 18:28:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:22:08.912 18:28:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:22:08.912 18:28:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:22:08.912 18:28:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:22:08.912 18:28:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:22:08.912 18:28:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:22:09.518 18:28:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:22:09.519 18:28:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:22:09.519 18:28:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:22:09.519 18:28:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:22:09.519 18:28:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:22:09.519 18:28:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:22:09.519 18:28:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:22:09.519 18:28:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:22:09.519 18:28:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:22:09.519 { 00:22:09.519 "name": "nvme0n1", 00:22:09.519 "aliases": [ 00:22:09.519 "b06e76e0-5c9c-4b0a-8072-6ef2eae4b924" 00:22:09.519 ], 00:22:09.519 "product_name": "NVMe disk", 00:22:09.519 "block_size": 4096, 00:22:09.519 "num_blocks": 1310720, 00:22:09.519 "uuid": "b06e76e0-5c9c-4b0a-8072-6ef2eae4b924", 00:22:09.519 "assigned_rate_limits": { 00:22:09.519 "rw_ios_per_sec": 0, 00:22:09.519 "rw_mbytes_per_sec": 0, 00:22:09.519 "r_mbytes_per_sec": 0, 00:22:09.519 "w_mbytes_per_sec": 0 00:22:09.519 }, 00:22:09.519 "claimed": true, 00:22:09.519 "claim_type": "read_many_write_one", 00:22:09.519 "zoned": false, 00:22:09.519 "supported_io_types": { 00:22:09.519 "read": true, 00:22:09.519 "write": true, 00:22:09.519 "unmap": true, 00:22:09.519 "flush": true, 00:22:09.519 "reset": true, 00:22:09.519 "nvme_admin": true, 00:22:09.519 "nvme_io": true, 00:22:09.519 "nvme_io_md": false, 00:22:09.519 "write_zeroes": true, 00:22:09.519 "zcopy": false, 00:22:09.519 "get_zone_info": false, 00:22:09.519 "zone_management": false, 00:22:09.519 "zone_append": false, 00:22:09.519 "compare": true, 00:22:09.519 "compare_and_write": false, 00:22:09.519 "abort": true, 00:22:09.519 "seek_hole": false, 00:22:09.519 "seek_data": false, 00:22:09.519 "copy": true, 00:22:09.519 
"nvme_iov_md": false 00:22:09.519 }, 00:22:09.519 "driver_specific": { 00:22:09.519 "nvme": [ 00:22:09.519 { 00:22:09.519 "pci_address": "0000:00:11.0", 00:22:09.519 "trid": { 00:22:09.519 "trtype": "PCIe", 00:22:09.519 "traddr": "0000:00:11.0" 00:22:09.519 }, 00:22:09.519 "ctrlr_data": { 00:22:09.519 "cntlid": 0, 00:22:09.519 "vendor_id": "0x1b36", 00:22:09.519 "model_number": "QEMU NVMe Ctrl", 00:22:09.519 "serial_number": "12341", 00:22:09.519 "firmware_revision": "8.0.0", 00:22:09.519 "subnqn": "nqn.2019-08.org.qemu:12341", 00:22:09.519 "oacs": { 00:22:09.519 "security": 0, 00:22:09.519 "format": 1, 00:22:09.519 "firmware": 0, 00:22:09.519 "ns_manage": 1 00:22:09.519 }, 00:22:09.519 "multi_ctrlr": false, 00:22:09.519 "ana_reporting": false 00:22:09.519 }, 00:22:09.519 "vs": { 00:22:09.519 "nvme_version": "1.4" 00:22:09.519 }, 00:22:09.519 "ns_data": { 00:22:09.519 "id": 1, 00:22:09.519 "can_share": false 00:22:09.519 } 00:22:09.519 } 00:22:09.519 ], 00:22:09.519 "mp_policy": "active_passive" 00:22:09.519 } 00:22:09.519 } 00:22:09.519 ]' 00:22:09.519 18:28:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:22:09.519 18:28:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:22:09.519 18:28:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:22:09.777 18:28:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=1310720 00:22:09.777 18:28:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:22:09.777 18:28:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 5120 00:22:09.777 18:28:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:22:09.777 18:28:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:22:09.777 18:28:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:22:09.777 18:28:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:22:09.777 18:28:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:22:10.036 18:28:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=d806e8eb-8f79-4bb0-ac95-c3ec123a11b2 00:22:10.036 18:28:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:22:10.036 18:28:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d806e8eb-8f79-4bb0-ac95-c3ec123a11b2 00:22:10.295 18:28:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:22:10.554 18:28:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=49720080-7b36-4a84-be68-319b606f9891 00:22:10.554 18:28:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 49720080-7b36-4a84-be68-319b606f9891 00:22:10.811 18:28:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=ef2f7b62-2727-42e8-902f-1e515717ef14 00:22:10.811 18:28:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:22:10.811 18:28:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 ef2f7b62-2727-42e8-902f-1e515717ef14 00:22:10.811 18:28:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:22:10.811 18:28:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:22:10.811 
18:28:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=ef2f7b62-2727-42e8-902f-1e515717ef14 00:22:10.811 18:28:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:22:10.811 18:28:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size ef2f7b62-2727-42e8-902f-1e515717ef14 00:22:10.811 18:28:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=ef2f7b62-2727-42e8-902f-1e515717ef14 00:22:10.811 18:28:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:22:10.811 18:28:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:22:10.811 18:28:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:22:10.811 18:28:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ef2f7b62-2727-42e8-902f-1e515717ef14 00:22:11.069 18:28:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:22:11.069 { 00:22:11.069 "name": "ef2f7b62-2727-42e8-902f-1e515717ef14", 00:22:11.069 "aliases": [ 00:22:11.069 "lvs/nvme0n1p0" 00:22:11.069 ], 00:22:11.069 "product_name": "Logical Volume", 00:22:11.069 "block_size": 4096, 00:22:11.069 "num_blocks": 26476544, 00:22:11.069 "uuid": "ef2f7b62-2727-42e8-902f-1e515717ef14", 00:22:11.069 "assigned_rate_limits": { 00:22:11.069 "rw_ios_per_sec": 0, 00:22:11.069 "rw_mbytes_per_sec": 0, 00:22:11.069 "r_mbytes_per_sec": 0, 00:22:11.069 "w_mbytes_per_sec": 0 00:22:11.069 }, 00:22:11.069 "claimed": false, 00:22:11.069 "zoned": false, 00:22:11.069 "supported_io_types": { 00:22:11.069 "read": true, 00:22:11.069 "write": true, 00:22:11.069 "unmap": true, 00:22:11.069 "flush": false, 00:22:11.069 "reset": true, 00:22:11.069 "nvme_admin": false, 00:22:11.069 "nvme_io": false, 00:22:11.069 "nvme_io_md": false, 00:22:11.069 "write_zeroes": true, 00:22:11.069 "zcopy": false, 00:22:11.069 "get_zone_info": false, 00:22:11.069 "zone_management": false, 00:22:11.069 "zone_append": false, 00:22:11.069 "compare": false, 00:22:11.069 "compare_and_write": false, 00:22:11.069 "abort": false, 00:22:11.069 "seek_hole": true, 00:22:11.069 "seek_data": true, 00:22:11.069 "copy": false, 00:22:11.069 "nvme_iov_md": false 00:22:11.069 }, 00:22:11.069 "driver_specific": { 00:22:11.069 "lvol": { 00:22:11.069 "lvol_store_uuid": "49720080-7b36-4a84-be68-319b606f9891", 00:22:11.069 "base_bdev": "nvme0n1", 00:22:11.069 "thin_provision": true, 00:22:11.069 "num_allocated_clusters": 0, 00:22:11.069 "snapshot": false, 00:22:11.069 "clone": false, 00:22:11.069 "esnap_clone": false 00:22:11.070 } 00:22:11.070 } 00:22:11.070 } 00:22:11.070 ]' 00:22:11.070 18:28:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:22:11.070 18:28:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:22:11.070 18:28:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:22:11.070 18:28:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:22:11.070 18:28:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:22:11.070 18:28:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:22:11.070 18:28:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:22:11.070 18:28:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:22:11.070 18:28:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:22:11.327 18:28:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:22:11.327 18:28:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:22:11.327 18:28:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size ef2f7b62-2727-42e8-902f-1e515717ef14 00:22:11.327 18:28:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=ef2f7b62-2727-42e8-902f-1e515717ef14 00:22:11.327 18:28:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:22:11.327 18:28:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:22:11.327 18:28:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:22:11.327 18:28:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ef2f7b62-2727-42e8-902f-1e515717ef14 00:22:11.585 18:28:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:22:11.585 { 00:22:11.585 "name": "ef2f7b62-2727-42e8-902f-1e515717ef14", 00:22:11.585 "aliases": [ 00:22:11.585 "lvs/nvme0n1p0" 00:22:11.585 ], 00:22:11.585 "product_name": "Logical Volume", 00:22:11.585 "block_size": 4096, 00:22:11.585 "num_blocks": 26476544, 00:22:11.585 "uuid": "ef2f7b62-2727-42e8-902f-1e515717ef14", 00:22:11.585 "assigned_rate_limits": { 00:22:11.585 "rw_ios_per_sec": 0, 00:22:11.585 "rw_mbytes_per_sec": 0, 00:22:11.585 "r_mbytes_per_sec": 0, 00:22:11.585 "w_mbytes_per_sec": 0 00:22:11.585 }, 00:22:11.585 "claimed": false, 00:22:11.585 "zoned": false, 00:22:11.585 "supported_io_types": { 00:22:11.585 "read": true, 00:22:11.585 "write": true, 00:22:11.585 "unmap": true, 00:22:11.585 "flush": false, 00:22:11.585 "reset": true, 00:22:11.585 "nvme_admin": false, 00:22:11.585 "nvme_io": false, 00:22:11.585 "nvme_io_md": false, 00:22:11.585 "write_zeroes": true, 00:22:11.585 "zcopy": false, 00:22:11.585 "get_zone_info": false, 00:22:11.585 "zone_management": false, 00:22:11.585 "zone_append": false, 00:22:11.585 "compare": false, 00:22:11.585 "compare_and_write": false, 00:22:11.585 "abort": false, 00:22:11.585 "seek_hole": true, 00:22:11.585 "seek_data": true, 00:22:11.585 "copy": false, 00:22:11.585 "nvme_iov_md": false 00:22:11.585 }, 00:22:11.585 "driver_specific": { 00:22:11.585 "lvol": { 00:22:11.585 "lvol_store_uuid": "49720080-7b36-4a84-be68-319b606f9891", 00:22:11.585 "base_bdev": "nvme0n1", 00:22:11.585 "thin_provision": true, 00:22:11.585 "num_allocated_clusters": 0, 00:22:11.585 "snapshot": false, 00:22:11.585 "clone": false, 00:22:11.585 "esnap_clone": false 00:22:11.585 } 00:22:11.585 } 00:22:11.585 } 00:22:11.585 ]' 00:22:11.585 18:28:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:22:11.585 18:28:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:22:11.585 18:28:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:22:11.585 18:28:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:22:11.585 18:28:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:22:11.585 18:28:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:22:11.585 18:28:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:22:11.585 18:28:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:22:11.843 18:28:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:22:11.843 18:28:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size ef2f7b62-2727-42e8-902f-1e515717ef14 00:22:11.843 18:28:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=ef2f7b62-2727-42e8-902f-1e515717ef14 00:22:11.843 18:28:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:22:11.843 18:28:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:22:11.843 18:28:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:22:11.843 18:28:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ef2f7b62-2727-42e8-902f-1e515717ef14 00:22:12.101 18:28:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:22:12.101 { 00:22:12.101 "name": "ef2f7b62-2727-42e8-902f-1e515717ef14", 00:22:12.101 "aliases": [ 00:22:12.101 "lvs/nvme0n1p0" 00:22:12.101 ], 00:22:12.101 "product_name": "Logical Volume", 00:22:12.101 "block_size": 4096, 00:22:12.101 "num_blocks": 26476544, 00:22:12.101 "uuid": "ef2f7b62-2727-42e8-902f-1e515717ef14", 00:22:12.101 "assigned_rate_limits": { 00:22:12.101 "rw_ios_per_sec": 0, 00:22:12.101 "rw_mbytes_per_sec": 0, 00:22:12.101 "r_mbytes_per_sec": 0, 00:22:12.101 "w_mbytes_per_sec": 0 00:22:12.101 }, 00:22:12.101 "claimed": false, 00:22:12.101 "zoned": false, 00:22:12.101 "supported_io_types": { 00:22:12.101 "read": true, 00:22:12.101 "write": true, 00:22:12.101 "unmap": true, 00:22:12.101 "flush": false, 00:22:12.101 "reset": true, 00:22:12.101 "nvme_admin": false, 00:22:12.101 "nvme_io": false, 00:22:12.101 "nvme_io_md": false, 00:22:12.101 "write_zeroes": true, 00:22:12.101 "zcopy": false, 00:22:12.101 "get_zone_info": false, 00:22:12.101 "zone_management": false, 00:22:12.101 "zone_append": false, 00:22:12.101 "compare": false, 00:22:12.101 "compare_and_write": false, 00:22:12.101 "abort": false, 00:22:12.101 "seek_hole": true, 00:22:12.101 "seek_data": true, 00:22:12.101 "copy": false, 00:22:12.101 "nvme_iov_md": false 00:22:12.101 }, 00:22:12.101 "driver_specific": { 00:22:12.101 "lvol": { 00:22:12.101 "lvol_store_uuid": "49720080-7b36-4a84-be68-319b606f9891", 00:22:12.101 "base_bdev": "nvme0n1", 00:22:12.101 "thin_provision": true, 00:22:12.101 "num_allocated_clusters": 0, 00:22:12.101 "snapshot": false, 00:22:12.101 "clone": false, 00:22:12.101 "esnap_clone": false 00:22:12.101 } 00:22:12.101 } 00:22:12.101 } 00:22:12.101 ]' 00:22:12.101 18:28:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:22:12.358 18:28:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:22:12.358 18:28:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:22:12.358 18:28:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:22:12.358 18:28:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:22:12.358 18:28:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:22:12.358 18:28:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:22:12.358 18:28:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d ef2f7b62-2727-42e8-902f-1e515717ef14 
--l2p_dram_limit 10' 00:22:12.358 18:28:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:22:12.358 18:28:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:22:12.358 18:28:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:22:12.358 18:28:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d ef2f7b62-2727-42e8-902f-1e515717ef14 --l2p_dram_limit 10 -c nvc0n1p0 00:22:12.615 [2024-07-11 18:28:58.772057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.615 [2024-07-11 18:28:58.772132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:12.615 [2024-07-11 18:28:58.772158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:12.616 [2024-07-11 18:28:58.772172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.616 [2024-07-11 18:28:58.772264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.616 [2024-07-11 18:28:58.772283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:12.616 [2024-07-11 18:28:58.772302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:22:12.616 [2024-07-11 18:28:58.772314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.616 [2024-07-11 18:28:58.772361] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:12.616 [2024-07-11 18:28:58.772684] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:12.616 [2024-07-11 18:28:58.772716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.616 [2024-07-11 18:28:58.772729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:12.616 [2024-07-11 18:28:58.772744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.375 ms 00:22:12.616 [2024-07-11 18:28:58.772766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.616 [2024-07-11 18:28:58.772899] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 85c256d6-5302-4f92-a9ec-36f2d09512df 00:22:12.616 [2024-07-11 18:28:58.774140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.616 [2024-07-11 18:28:58.774330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:22:12.616 [2024-07-11 18:28:58.774485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:22:12.616 [2024-07-11 18:28:58.774658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.616 [2024-07-11 18:28:58.779244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.616 [2024-07-11 18:28:58.779452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:12.616 [2024-07-11 18:28:58.779594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.476 ms 00:22:12.616 [2024-07-11 18:28:58.779751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.616 [2024-07-11 18:28:58.779909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.616 [2024-07-11 18:28:58.780003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:12.616 [2024-07-11 18:28:58.780161] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:22:12.616 [2024-07-11 18:28:58.780192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.616 [2024-07-11 18:28:58.780282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.616 [2024-07-11 18:28:58.780307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:12.616 [2024-07-11 18:28:58.780322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:22:12.616 [2024-07-11 18:28:58.780336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.616 [2024-07-11 18:28:58.780371] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:12.616 [2024-07-11 18:28:58.782062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.616 [2024-07-11 18:28:58.782240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:12.616 [2024-07-11 18:28:58.782388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.696 ms 00:22:12.616 [2024-07-11 18:28:58.782461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.616 [2024-07-11 18:28:58.782628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.616 [2024-07-11 18:28:58.782770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:12.616 [2024-07-11 18:28:58.782913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:22:12.616 [2024-07-11 18:28:58.783072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.616 [2024-07-11 18:28:58.783307] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:22:12.616 [2024-07-11 18:28:58.783633] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:12.616 [2024-07-11 18:28:58.783674] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:12.616 [2024-07-11 18:28:58.783693] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:22:12.616 [2024-07-11 18:28:58.783711] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:12.616 [2024-07-11 18:28:58.783726] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:12.616 [2024-07-11 18:28:58.783742] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:12.616 [2024-07-11 18:28:58.783757] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:12.616 [2024-07-11 18:28:58.783770] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:12.616 [2024-07-11 18:28:58.783783] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:12.616 [2024-07-11 18:28:58.783799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.616 [2024-07-11 18:28:58.783812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:12.616 [2024-07-11 18:28:58.783827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.502 ms 00:22:12.616 [2024-07-11 18:28:58.783839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.616 [2024-07-11 18:28:58.783960] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.616 [2024-07-11 18:28:58.783987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:12.616 [2024-07-11 18:28:58.784006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:22:12.616 [2024-07-11 18:28:58.784019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.616 [2024-07-11 18:28:58.784171] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:12.616 [2024-07-11 18:28:58.784194] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:12.616 [2024-07-11 18:28:58.784211] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:12.616 [2024-07-11 18:28:58.784224] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:12.616 [2024-07-11 18:28:58.784239] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:12.616 [2024-07-11 18:28:58.784251] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:12.616 [2024-07-11 18:28:58.784264] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:12.616 [2024-07-11 18:28:58.784276] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:12.616 [2024-07-11 18:28:58.784289] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:12.616 [2024-07-11 18:28:58.784300] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:12.616 [2024-07-11 18:28:58.784313] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:12.616 [2024-07-11 18:28:58.784325] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:12.616 [2024-07-11 18:28:58.784338] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:12.616 [2024-07-11 18:28:58.784349] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:12.617 [2024-07-11 18:28:58.784373] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:12.617 [2024-07-11 18:28:58.784396] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:12.617 [2024-07-11 18:28:58.784426] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:12.617 [2024-07-11 18:28:58.784448] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:12.617 [2024-07-11 18:28:58.784464] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:12.617 [2024-07-11 18:28:58.784476] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:12.617 [2024-07-11 18:28:58.784489] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:12.617 [2024-07-11 18:28:58.784501] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:12.617 [2024-07-11 18:28:58.784514] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:12.617 [2024-07-11 18:28:58.784525] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:12.617 [2024-07-11 18:28:58.784538] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:12.617 [2024-07-11 18:28:58.784549] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:12.617 [2024-07-11 18:28:58.784562] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:12.617 [2024-07-11 18:28:58.784573] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:12.617 [2024-07-11 18:28:58.784588] 
ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:12.617 [2024-07-11 18:28:58.784600] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:12.617 [2024-07-11 18:28:58.784615] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:12.617 [2024-07-11 18:28:58.784626] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:12.617 [2024-07-11 18:28:58.784640] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:12.617 [2024-07-11 18:28:58.784651] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:12.617 [2024-07-11 18:28:58.784664] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:12.617 [2024-07-11 18:28:58.784675] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:12.617 [2024-07-11 18:28:58.784688] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:12.617 [2024-07-11 18:28:58.784700] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:12.617 [2024-07-11 18:28:58.784713] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:12.617 [2024-07-11 18:28:58.784724] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:12.617 [2024-07-11 18:28:58.784737] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:12.617 [2024-07-11 18:28:58.784749] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:12.617 [2024-07-11 18:28:58.784762] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:12.617 [2024-07-11 18:28:58.784772] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:12.617 [2024-07-11 18:28:58.784788] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:12.617 [2024-07-11 18:28:58.784804] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:12.617 [2024-07-11 18:28:58.784821] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:12.617 [2024-07-11 18:28:58.784843] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:12.617 [2024-07-11 18:28:58.784861] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:12.617 [2024-07-11 18:28:58.784883] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:12.617 [2024-07-11 18:28:58.784911] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:12.617 [2024-07-11 18:28:58.784926] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:12.617 [2024-07-11 18:28:58.784941] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:12.617 [2024-07-11 18:28:58.784969] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:12.617 [2024-07-11 18:28:58.784990] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:12.617 [2024-07-11 18:28:58.785004] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:12.617 [2024-07-11 18:28:58.785018] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:12.617 [2024-07-11 18:28:58.785030] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:12.617 [2024-07-11 18:28:58.785044] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:12.617 [2024-07-11 18:28:58.785056] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:12.617 [2024-07-11 18:28:58.785070] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:12.617 [2024-07-11 18:28:58.785099] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:12.617 [2024-07-11 18:28:58.785119] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:12.617 [2024-07-11 18:28:58.785131] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:12.617 [2024-07-11 18:28:58.785146] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:12.617 [2024-07-11 18:28:58.785158] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:12.617 [2024-07-11 18:28:58.785172] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:12.617 [2024-07-11 18:28:58.785184] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:12.617 [2024-07-11 18:28:58.785201] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:12.617 [2024-07-11 18:28:58.785223] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:12.617 [2024-07-11 18:28:58.785251] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:12.617 [2024-07-11 18:28:58.785275] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:12.617 [2024-07-11 18:28:58.785299] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:12.617 [2024-07-11 18:28:58.785312] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:12.617 [2024-07-11 18:28:58.785327] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:12.617 [2024-07-11 18:28:58.785341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.617 [2024-07-11 18:28:58.785356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:12.617 [2024-07-11 18:28:58.785369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.243 ms 00:22:12.617 [2024-07-11 18:28:58.785396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.617 [2024-07-11 18:28:58.785484] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:22:12.617 [2024-07-11 18:28:58.785511] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:22:14.515 [2024-07-11 18:29:00.685469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.515 [2024-07-11 18:29:00.685553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:22:14.515 [2024-07-11 18:29:00.685576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1899.993 ms 00:22:14.515 [2024-07-11 18:29:00.685592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.515 [2024-07-11 18:29:00.693140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.515 [2024-07-11 18:29:00.693203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:14.515 [2024-07-11 18:29:00.693223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.454 ms 00:22:14.515 [2024-07-11 18:29:00.693239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.515 [2024-07-11 18:29:00.693383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.515 [2024-07-11 18:29:00.693408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:14.515 [2024-07-11 18:29:00.693423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:22:14.515 [2024-07-11 18:29:00.693438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.515 [2024-07-11 18:29:00.701687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.515 [2024-07-11 18:29:00.701740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:14.515 [2024-07-11 18:29:00.701759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.187 ms 00:22:14.515 [2024-07-11 18:29:00.701774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.515 [2024-07-11 18:29:00.701819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.515 [2024-07-11 18:29:00.701840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:14.515 [2024-07-11 18:29:00.701853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:14.515 [2024-07-11 18:29:00.701868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.515 [2024-07-11 18:29:00.702260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.515 [2024-07-11 18:29:00.702286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:14.515 [2024-07-11 18:29:00.702301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.321 ms 00:22:14.515 [2024-07-11 18:29:00.702315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.515 [2024-07-11 18:29:00.702463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.515 [2024-07-11 18:29:00.702496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:14.515 [2024-07-11 18:29:00.702510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.120 ms 00:22:14.515 [2024-07-11 18:29:00.702534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.515 [2024-07-11 18:29:00.708274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.515 [2024-07-11 18:29:00.708321] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:14.515 [2024-07-11 18:29:00.708339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.713 ms 00:22:14.515 [2024-07-11 18:29:00.708354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.515 [2024-07-11 18:29:00.717592] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:14.515 [2024-07-11 18:29:00.720366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.515 [2024-07-11 18:29:00.720403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:14.515 [2024-07-11 18:29:00.720423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.903 ms 00:22:14.515 [2024-07-11 18:29:00.720446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.515 [2024-07-11 18:29:00.770343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.515 [2024-07-11 18:29:00.770419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:22:14.515 [2024-07-11 18:29:00.770451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.853 ms 00:22:14.515 [2024-07-11 18:29:00.770464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.515 [2024-07-11 18:29:00.770710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.515 [2024-07-11 18:29:00.770732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:14.515 [2024-07-11 18:29:00.770748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.171 ms 00:22:14.515 [2024-07-11 18:29:00.770761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.515 [2024-07-11 18:29:00.774304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.515 [2024-07-11 18:29:00.774348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:22:14.515 [2024-07-11 18:29:00.774381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.471 ms 00:22:14.515 [2024-07-11 18:29:00.774406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.515 [2024-07-11 18:29:00.777416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.515 [2024-07-11 18:29:00.777458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:22:14.515 [2024-07-11 18:29:00.777480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.954 ms 00:22:14.515 [2024-07-11 18:29:00.777492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.515 [2024-07-11 18:29:00.777855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.515 [2024-07-11 18:29:00.777880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:14.515 [2024-07-11 18:29:00.777897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.304 ms 00:22:14.515 [2024-07-11 18:29:00.777910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.515 [2024-07-11 18:29:00.805677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.515 [2024-07-11 18:29:00.805746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:22:14.515 [2024-07-11 18:29:00.805771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.727 ms 00:22:14.515 [2024-07-11 18:29:00.805788] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.515 [2024-07-11 18:29:00.809840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.515 [2024-07-11 18:29:00.809886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:22:14.515 [2024-07-11 18:29:00.809908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.994 ms 00:22:14.515 [2024-07-11 18:29:00.809921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.515 [2024-07-11 18:29:00.813521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.515 [2024-07-11 18:29:00.813565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:22:14.515 [2024-07-11 18:29:00.813586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.545 ms 00:22:14.515 [2024-07-11 18:29:00.813598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.515 [2024-07-11 18:29:00.817361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.515 [2024-07-11 18:29:00.817406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:14.515 [2024-07-11 18:29:00.817427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.711 ms 00:22:14.515 [2024-07-11 18:29:00.817440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.515 [2024-07-11 18:29:00.817500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.515 [2024-07-11 18:29:00.817520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:14.515 [2024-07-11 18:29:00.817536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:22:14.515 [2024-07-11 18:29:00.817559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.515 [2024-07-11 18:29:00.817639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.515 [2024-07-11 18:29:00.817656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:14.515 [2024-07-11 18:29:00.817671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:22:14.515 [2024-07-11 18:29:00.817686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.515 { 00:22:14.515 "name": "ftl0", 00:22:14.515 "uuid": "85c256d6-5302-4f92-a9ec-36f2d09512df" 00:22:14.515 } 00:22:14.515 [2024-07-11 18:29:00.818747] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2046.215 ms, result 0 00:22:14.515 18:29:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:22:14.515 18:29:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:22:14.777 18:29:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:22:14.777 18:29:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:22:14.777 18:29:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:22:15.035 /dev/nbd0 00:22:15.035 18:29:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:22:15.035 18:29:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:22:15.035 18:29:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@867 -- # local i 00:22:15.035 18:29:01 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@869 -- # (( i = 1 )) 00:22:15.035 18:29:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:22:15.035 18:29:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:22:15.036 18:29:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@871 -- # break 00:22:15.036 18:29:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:22:15.036 18:29:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:22:15.036 18:29:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:22:15.036 1+0 records in 00:22:15.036 1+0 records out 00:22:15.036 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000326209 s, 12.6 MB/s 00:22:15.036 18:29:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:22:15.036 18:29:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # size=4096 00:22:15.036 18:29:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:22:15.036 18:29:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:22:15.036 18:29:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@887 -- # return 0 00:22:15.036 18:29:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:22:15.294 [2024-07-11 18:29:01.498285] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:22:15.294 [2024-07-11 18:29:01.498460] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93439 ] 00:22:15.294 [2024-07-11 18:29:01.649215] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:15.294 [2024-07-11 18:29:01.690120] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:21.604  Copying: 168/1024 [MB] (168 MBps) Copying: 342/1024 [MB] (173 MBps) Copying: 516/1024 [MB] (174 MBps) Copying: 690/1024 [MB] (174 MBps) Copying: 863/1024 [MB] (173 MBps) Copying: 1024/1024 [MB] (average 172 MBps) 00:22:21.604 00:22:21.604 18:29:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:22:24.135 18:29:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:22:24.135 [2024-07-11 18:29:10.071702] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
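
The waitfornbd gate traced above (autotest_common.sh@866-887) is worth unpacking: it polls /proc/partitions until the kernel lists nbd0, then proves the device actually answers I/O with a single 4 KiB direct-mode read whose output size it checks. A minimal bash sketch, reconstructed from the logged commands rather than copied from autotest_common.sh (the 20-try bound matches the logged loop; the sleep interval and scratch path are assumptions):

    # Reconstructed from the trace: wait for an nbd device, then verify a read.
    waitfornbd() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            # The device is up once the kernel lists it in /proc/partitions.
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # assumed back-off; the run above passes on iteration 1
        done
        # One direct-I/O block read proves the device is serving requests.
        dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
        local size
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ]
    }

In this run the gate passes immediately (1+0 records in/out, size 4096), after which @77 streams the 1 GiB testfile onto /dev/nbd0 with --oflag=direct. Compare the ~15 MBps frames that follow with the ~172 MBps average of the plain-file fill at @75: the difference is the cost of the nbd + FTL write path.
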
00:22:24.135 [2024-07-11 18:29:10.071904] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93532 ] 00:22:24.135 [2024-07-11 18:29:10.221936] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:24.135 [2024-07-11 18:29:10.264316] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:30.024  Copying: 15/1024 [MB] (15 MBps) Copying: 30/1024 [MB] (14 MBps) Copying: 44/1024 [MB] (13 MBps) Copying: 58/1024 [MB] (13 MBps) Copying: 71/1024 [MB] (13 MBps) Copying: 85/1024 [MB] (13 MBps) Copying: 100/1024 [MB] (15 MBps) Copying: 116/1024 [MB] (15 MBps) Copying: 132/1024 [MB] (15 MBps) Copying: 148/1024 [MB] (16 MBps) Copying: 165/1024 [MB] (16 MBps) Copying: 181/1024 [MB] (16 MBps) Copying: 197/1024 [MB] (16 MBps) Copying: 214/1024 [MB] (16 MBps) Copying: 230/1024 [MB] (16 MBps) Copying: 246/1024 [MB] (16 MBps) Copying: 261/1024 [MB] (15 MBps) Copying: 277/1024 [MB] (15 MBps) Copying: 293/1024 [MB] (15 MBps) Copying: 309/1024 [MB] (16 MBps) Copying: 325/1024 [MB] (15 MBps) Copying: 341/1024 [MB] (15 MBps) Copying: 357/1024 [MB] (15 MBps) Copying: 373/1024 [MB] (16 MBps) Copying: 389/1024 [MB] (16 MBps) Copying: 405/1024 [MB] (15 MBps) Copying: 421/1024 [MB] (16 MBps) Copying: 437/1024 [MB] (16 MBps) Copying: 453/1024 [MB] (15 MBps) Copying: 469/1024 [MB] (15 MBps) Copying: 484/1024 [MB] (15 MBps) Copying: 500/1024 [MB] (15 MBps) Copying: 515/1024 [MB] (15 MBps) Copying: 529/1024 [MB] (14 MBps) Copying: 545/1024 [MB] (15 MBps) Copying: 560/1024 [MB] (14 MBps) Copying: 575/1024 [MB] (15 MBps) Copying: 591/1024 [MB] (15 MBps) Copying: 606/1024 [MB] (15 MBps) Copying: 622/1024 [MB] (15 MBps) Copying: 637/1024 [MB] (15 MBps) Copying: 652/1024 [MB] (15 MBps) Copying: 668/1024 [MB] (15 MBps) Copying: 684/1024 [MB] (15 MBps) Copying: 699/1024 [MB] (15 MBps) Copying: 715/1024 [MB] (15 MBps) Copying: 731/1024 [MB] (15 MBps) Copying: 746/1024 [MB] (15 MBps) Copying: 762/1024 [MB] (15 MBps) Copying: 778/1024 [MB] (15 MBps) Copying: 794/1024 [MB] (16 MBps) Copying: 810/1024 [MB] (16 MBps) Copying: 826/1024 [MB] (16 MBps) Copying: 842/1024 [MB] (15 MBps) Copying: 858/1024 [MB] (15 MBps) Copying: 873/1024 [MB] (15 MBps) Copying: 889/1024 [MB] (15 MBps) Copying: 904/1024 [MB] (15 MBps) Copying: 920/1024 [MB] (15 MBps) Copying: 935/1024 [MB] (15 MBps) Copying: 950/1024 [MB] (15 MBps) Copying: 965/1024 [MB] (15 MBps) Copying: 980/1024 [MB] (15 MBps) Copying: 996/1024 [MB] (15 MBps) Copying: 1011/1024 [MB] (14 MBps) Copying: 1024/1024 [MB] (average 15 MBps) 00:23:30.024 00:23:30.024 18:30:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:23:30.024 18:30:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:23:30.282 18:30:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:23:30.541 [2024-07-11 18:30:16.854773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.541 [2024-07-11 18:30:16.854845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:30.541 [2024-07-11 18:30:16.854864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:30.541 [2024-07-11 18:30:16.854876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:23:30.541 [2024-07-11 18:30:16.854908] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:30.541 [2024-07-11 18:30:16.855411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.541 [2024-07-11 18:30:16.855464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:30.541 [2024-07-11 18:30:16.855498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.467 ms 00:23:30.541 [2024-07-11 18:30:16.855509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.541 [2024-07-11 18:30:16.858134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.541 [2024-07-11 18:30:16.858169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:30.541 [2024-07-11 18:30:16.858189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.528 ms 00:23:30.541 [2024-07-11 18:30:16.858200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.541 [2024-07-11 18:30:16.874802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.541 [2024-07-11 18:30:16.874857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:30.541 [2024-07-11 18:30:16.874892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.574 ms 00:23:30.541 [2024-07-11 18:30:16.874917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.541 [2024-07-11 18:30:16.880734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.541 [2024-07-11 18:30:16.880771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:30.541 [2024-07-11 18:30:16.880804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.774 ms 00:23:30.541 [2024-07-11 18:30:16.880814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.541 [2024-07-11 18:30:16.882271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.541 [2024-07-11 18:30:16.882468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:30.541 [2024-07-11 18:30:16.882610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.382 ms 00:23:30.541 [2024-07-11 18:30:16.882632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.541 [2024-07-11 18:30:16.888878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.541 [2024-07-11 18:30:16.888932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:30.541 [2024-07-11 18:30:16.888965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.195 ms 00:23:30.541 [2024-07-11 18:30:16.888976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.541 [2024-07-11 18:30:16.889145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.541 [2024-07-11 18:30:16.889164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:30.541 [2024-07-11 18:30:16.889179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.127 ms 00:23:30.541 [2024-07-11 18:30:16.889189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.541 [2024-07-11 18:30:16.891872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.541 [2024-07-11 18:30:16.891937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info 
metadata 00:23:30.541 [2024-07-11 18:30:16.891970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.642 ms 00:23:30.541 [2024-07-11 18:30:16.891980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.541 [2024-07-11 18:30:16.894178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.541 [2024-07-11 18:30:16.894390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:23:30.541 [2024-07-11 18:30:16.894421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.155 ms 00:23:30.541 [2024-07-11 18:30:16.894433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.541 [2024-07-11 18:30:16.895742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.541 [2024-07-11 18:30:16.895788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:30.541 [2024-07-11 18:30:16.895822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.260 ms 00:23:30.541 [2024-07-11 18:30:16.895833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.541 [2024-07-11 18:30:16.897018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.541 [2024-07-11 18:30:16.897050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:30.541 [2024-07-11 18:30:16.897081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.117 ms 00:23:30.541 [2024-07-11 18:30:16.897091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.541 [2024-07-11 18:30:16.897155] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:30.541 [2024-07-11 18:30:16.897178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:30.541 [2024-07-11 18:30:16.897193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:30.541 [2024-07-11 18:30:16.897203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:30.541 [2024-07-11 18:30:16.897215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:30.541 [2024-07-11 18:30:16.897225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:30.541 [2024-07-11 18:30:16.897249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:30.541 [2024-07-11 18:30:16.897260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:30.541 [2024-07-11 18:30:16.897272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:30.541 [2024-07-11 18:30:16.897283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:30.541 [2024-07-11 18:30:16.897295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:30.541 [2024-07-11 18:30:16.897305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:30.541 [2024-07-11 18:30:16.897317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:30.541 [2024-07-11 18:30:16.897327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 
00:23:30.541 [2024-07-11 18:30:16.897339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:30.541 [2024-07-11 18:30:16.897349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:30.541 [2024-07-11 18:30:16.897361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:30.541 [2024-07-11 18:30:16.897372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:30.541 [2024-07-11 18:30:16.897383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:30.541 [2024-07-11 18:30:16.897394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:30.542 [2024-07-11 18:30:16.897406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:30.542 [2024-07-11 18:30:16.897416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:30.542 [2024-07-11 18:30:16.897430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:30.542 [2024-07-11 18:30:16.897440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:30.542 [2024-07-11 18:30:16.897452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:30.542 [2024-07-11 18:30:16.897462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:30.542 [2024-07-11 18:30:16.897474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:30.542 [2024-07-11 18:30:16.897485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:30.542 [2024-07-11 18:30:16.897497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:30.542 [2024-07-11 18:30:16.897507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:30.542 [2024-07-11 18:30:16.897519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:30.542 [2024-07-11 18:30:16.897530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:30.542 [2024-07-11 18:30:16.897557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:30.542 [2024-07-11 18:30:16.897567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:30.542 [2024-07-11 18:30:16.897580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:30.542 [2024-07-11 18:30:16.897591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:30.542 [2024-07-11 18:30:16.897602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:30.542 [2024-07-11 18:30:16.897612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:30.542 [2024-07-11 18:30:16.897625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 
wr_cnt: 0 state: free 00:23:30.542 [2024-07-11 18:30:16.897635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:30.542 [2024-07-11 18:30:16.897647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:30.542 [2024-07-11 18:30:16.897657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:30.542 [2024-07-11 18:30:16.897669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:30.542 [2024-07-11 18:30:16.897679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:30.542 [2024-07-11 18:30:16.897690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:30.542 [2024-07-11 18:30:16.897700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:30.542 [2024-07-11 18:30:16.897711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:30.542 [2024-07-11 18:30:16.897721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:30.542 [2024-07-11 18:30:16.897732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:30.542 [2024-07-11 18:30:16.897742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:30.542 [2024-07-11 18:30:16.897754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:30.542 [2024-07-11 18:30:16.897764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:30.542 [2024-07-11 18:30:16.897775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:30.542 [2024-07-11 18:30:16.897785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:30.542 [2024-07-11 18:30:16.897798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:30.542 [2024-07-11 18:30:16.897808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:30.542 [2024-07-11 18:30:16.897820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:30.542 [2024-07-11 18:30:16.897830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:30.542 [2024-07-11 18:30:16.897841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:30.542 [2024-07-11 18:30:16.897851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:30.542 [2024-07-11 18:30:16.897864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:30.542 [2024-07-11 18:30:16.897874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:30.542 [2024-07-11 18:30:16.897886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:30.542 [2024-07-11 18:30:16.897896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:30.542 [2024-07-11 18:30:16.897912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:30.542 [2024-07-11 18:30:16.897922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:30.542 [2024-07-11 18:30:16.897935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:30.542 [2024-07-11 18:30:16.897945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:30.542 [2024-07-11 18:30:16.897957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:30.542 [2024-07-11 18:30:16.897967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:30.542 [2024-07-11 18:30:16.897980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:30.542 [2024-07-11 18:30:16.897990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:30.542 [2024-07-11 18:30:16.898001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:30.542 [2024-07-11 18:30:16.898012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:30.542 [2024-07-11 18:30:16.898023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:30.542 [2024-07-11 18:30:16.898033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:30.542 [2024-07-11 18:30:16.898044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:30.542 [2024-07-11 18:30:16.898054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:30.542 [2024-07-11 18:30:16.898066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:30.542 [2024-07-11 18:30:16.898076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:30.542 [2024-07-11 18:30:16.898104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:30.542 [2024-07-11 18:30:16.898114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:30.542 [2024-07-11 18:30:16.898137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:30.542 [2024-07-11 18:30:16.898150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:30.542 [2024-07-11 18:30:16.898162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:30.542 [2024-07-11 18:30:16.898173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:30.542 [2024-07-11 18:30:16.898188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:30.542 [2024-07-11 18:30:16.898206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:30.542 [2024-07-11 18:30:16.898218] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:30.542 [2024-07-11 18:30:16.898228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:30.542 [2024-07-11 18:30:16.898239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:30.542 [2024-07-11 18:30:16.898249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:30.542 [2024-07-11 18:30:16.898261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:30.542 [2024-07-11 18:30:16.898271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:30.542 [2024-07-11 18:30:16.898282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:30.542 [2024-07-11 18:30:16.898293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:30.542 [2024-07-11 18:30:16.898304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:30.542 [2024-07-11 18:30:16.898315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:30.542 [2024-07-11 18:30:16.898327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:30.542 [2024-07-11 18:30:16.898337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:30.542 [2024-07-11 18:30:16.898348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:30.542 [2024-07-11 18:30:16.898377] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:30.542 [2024-07-11 18:30:16.898407] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 85c256d6-5302-4f92-a9ec-36f2d09512df 00:23:30.542 [2024-07-11 18:30:16.898418] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:30.542 [2024-07-11 18:30:16.898437] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:30.542 [2024-07-11 18:30:16.898447] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:30.542 [2024-07-11 18:30:16.898459] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:30.542 [2024-07-11 18:30:16.898484] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:30.542 [2024-07-11 18:30:16.898497] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:30.542 [2024-07-11 18:30:16.898508] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:30.542 [2024-07-11 18:30:16.898519] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:30.542 [2024-07-11 18:30:16.898528] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:30.542 [2024-07-11 18:30:16.898543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.542 [2024-07-11 18:30:16.898554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:30.542 [2024-07-11 18:30:16.898569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.389 ms 00:23:30.542 [2024-07-11 18:30:16.898580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.542 [2024-07-11 18:30:16.899921] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.542 [2024-07-11 18:30:16.899946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:30.542 [2024-07-11 18:30:16.899963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.314 ms 00:23:30.542 [2024-07-11 18:30:16.899973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.542 [2024-07-11 18:30:16.900070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.543 [2024-07-11 18:30:16.900131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:30.543 [2024-07-11 18:30:16.900147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:23:30.543 [2024-07-11 18:30:16.900157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.543 [2024-07-11 18:30:16.904860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:30.543 [2024-07-11 18:30:16.904889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:30.543 [2024-07-11 18:30:16.904905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:30.543 [2024-07-11 18:30:16.904915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.543 [2024-07-11 18:30:16.904984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:30.543 [2024-07-11 18:30:16.905000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:30.543 [2024-07-11 18:30:16.905012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:30.543 [2024-07-11 18:30:16.905022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.543 [2024-07-11 18:30:16.905103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:30.543 [2024-07-11 18:30:16.905132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:30.543 [2024-07-11 18:30:16.905156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:30.543 [2024-07-11 18:30:16.905177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.543 [2024-07-11 18:30:16.905204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:30.543 [2024-07-11 18:30:16.905217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:30.543 [2024-07-11 18:30:16.905240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:30.543 [2024-07-11 18:30:16.905250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.543 [2024-07-11 18:30:16.912711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:30.543 [2024-07-11 18:30:16.912769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:30.543 [2024-07-11 18:30:16.912805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:30.543 [2024-07-11 18:30:16.912816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.543 [2024-07-11 18:30:16.918808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:30.543 [2024-07-11 18:30:16.918856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:30.543 [2024-07-11 18:30:16.918891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:30.543 [2024-07-11 18:30:16.918901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:23:30.543 [2024-07-11 18:30:16.918972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:30.543 [2024-07-11 18:30:16.918987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:30.543 [2024-07-11 18:30:16.919002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:30.543 [2024-07-11 18:30:16.919012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.543 [2024-07-11 18:30:16.919078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:30.543 [2024-07-11 18:30:16.919092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:30.543 [2024-07-11 18:30:16.919451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:30.543 [2024-07-11 18:30:16.919497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.543 [2024-07-11 18:30:16.919700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:30.543 [2024-07-11 18:30:16.919759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:30.543 [2024-07-11 18:30:16.919804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:30.543 [2024-07-11 18:30:16.919840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.543 [2024-07-11 18:30:16.919897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:30.543 [2024-07-11 18:30:16.919914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:30.543 [2024-07-11 18:30:16.919928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:30.543 [2024-07-11 18:30:16.919938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.543 [2024-07-11 18:30:16.919990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:30.543 [2024-07-11 18:30:16.920005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:30.543 [2024-07-11 18:30:16.920020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:30.543 [2024-07-11 18:30:16.920042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.543 [2024-07-11 18:30:16.920154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:30.543 [2024-07-11 18:30:16.920176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:30.543 [2024-07-11 18:30:16.920201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:30.543 [2024-07-11 18:30:16.920214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.543 [2024-07-11 18:30:16.920412] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 65.583 ms, result 0 00:23:30.543 true 00:23:30.543 18:30:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 93308 00:23:30.543 18:30:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid93308 00:23:30.543 18:30:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:23:30.802 [2024-07-11 18:30:17.040770] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
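
This is the pivot the test is named for. @83 SIGKILLs the first target (pid 93308), so no exit handler runs inside the process, and @84 has to remove the orphaned /dev/shm trace file by hand for the same reason; the shell's "93308 Killed" notice only surfaces a few lines below, when the script next reaps the job. From here on, the spdk_dd invocations run as standalone SPDK applications. The steps as logged, with the size arithmetic spelled out (pid and paths copied from the trace):

    # dirty_shutdown.sh@83-87 as logged:
    kill -9 93308                            # SIGKILL: no cleanup path runs
    rm -f /dev/shm/spdk_tgt_trace.pid93308   # orphaned trace file, removed by hand
    # Second reference file: 262144 blocks * 4096 B = 1 GiB of random data.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom \
        --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 \
        --bs=4096 --count=262144
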
00:23:30.802 [2024-07-11 18:30:17.040944] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94200 ] 00:23:30.802 [2024-07-11 18:30:17.189068] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:31.061 [2024-07-11 18:30:17.225234] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:36.087  Copying: 211/1024 [MB] (211 MBps) Copying: 423/1024 [MB] (211 MBps) Copying: 634/1024 [MB] (211 MBps) Copying: 834/1024 [MB] (199 MBps) Copying: 1024/1024 [MB] (average 207 MBps) 00:23:36.087 00:23:36.087 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 93308 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:23:36.087 18:30:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:36.087 [2024-07-11 18:30:22.498138] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:23:36.087 [2024-07-11 18:30:22.498322] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94259 ] 00:23:36.345 [2024-07-11 18:30:22.644387] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:36.345 [2024-07-11 18:30:22.678770] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:36.603 [2024-07-11 18:30:22.763547] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:36.603 [2024-07-11 18:30:22.763676] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:36.603 [2024-07-11 18:30:22.829100] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:23:36.603 [2024-07-11 18:30:22.829406] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:23:36.603 [2024-07-11 18:30:22.829739] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:23:36.860 [2024-07-11 18:30:23.105461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.860 [2024-07-11 18:30:23.105525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:36.860 [2024-07-11 18:30:23.105558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:36.861 [2024-07-11 18:30:23.105578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.861 [2024-07-11 18:30:23.105636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.861 [2024-07-11 18:30:23.105652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:36.861 [2024-07-11 18:30:23.105670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:23:36.861 [2024-07-11 18:30:23.105690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.861 [2024-07-11 18:30:23.105722] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:36.861 [2024-07-11 18:30:23.105977] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 
00:23:36.861 [2024-07-11 18:30:23.106009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.861 [2024-07-11 18:30:23.106020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:36.861 [2024-07-11 18:30:23.106030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.295 ms 00:23:36.861 [2024-07-11 18:30:23.106047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.861 [2024-07-11 18:30:23.107180] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:36.861 [2024-07-11 18:30:23.109293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.861 [2024-07-11 18:30:23.109329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:36.861 [2024-07-11 18:30:23.109370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.114 ms 00:23:36.861 [2024-07-11 18:30:23.109380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.861 [2024-07-11 18:30:23.109447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.861 [2024-07-11 18:30:23.109467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:36.861 [2024-07-11 18:30:23.109478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:23:36.861 [2024-07-11 18:30:23.109510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.861 [2024-07-11 18:30:23.113553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.861 [2024-07-11 18:30:23.113588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:36.861 [2024-07-11 18:30:23.113627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.980 ms 00:23:36.861 [2024-07-11 18:30:23.113643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.861 [2024-07-11 18:30:23.113734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.861 [2024-07-11 18:30:23.113753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:36.861 [2024-07-11 18:30:23.113775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:23:36.861 [2024-07-11 18:30:23.113791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.861 [2024-07-11 18:30:23.113859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.861 [2024-07-11 18:30:23.113874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:36.861 [2024-07-11 18:30:23.113885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:23:36.861 [2024-07-11 18:30:23.113902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.861 [2024-07-11 18:30:23.113931] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:36.861 [2024-07-11 18:30:23.115168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.861 [2024-07-11 18:30:23.115208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:36.861 [2024-07-11 18:30:23.115245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.244 ms 00:23:36.861 [2024-07-11 18:30:23.115255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.861 [2024-07-11 18:30:23.115294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.861 
[2024-07-11 18:30:23.115308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:36.861 [2024-07-11 18:30:23.115319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:23:36.861 [2024-07-11 18:30:23.115329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.861 [2024-07-11 18:30:23.115359] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:36.861 [2024-07-11 18:30:23.115385] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:36.861 [2024-07-11 18:30:23.115444] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:36.861 [2024-07-11 18:30:23.115469] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:23:36.861 [2024-07-11 18:30:23.115577] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:36.861 [2024-07-11 18:30:23.115591] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:36.861 [2024-07-11 18:30:23.115603] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:23:36.861 [2024-07-11 18:30:23.115616] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:36.861 [2024-07-11 18:30:23.115642] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:36.861 [2024-07-11 18:30:23.115681] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:36.861 [2024-07-11 18:30:23.115689] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:36.861 [2024-07-11 18:30:23.115710] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:36.861 [2024-07-11 18:30:23.115719] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:36.861 [2024-07-11 18:30:23.115728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.861 [2024-07-11 18:30:23.115737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:36.861 [2024-07-11 18:30:23.115747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.375 ms 00:23:36.861 [2024-07-11 18:30:23.115758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.861 [2024-07-11 18:30:23.115830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.861 [2024-07-11 18:30:23.115840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:36.861 [2024-07-11 18:30:23.115859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:23:36.861 [2024-07-11 18:30:23.115867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.861 [2024-07-11 18:30:23.115962] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:36.861 [2024-07-11 18:30:23.115978] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:36.861 [2024-07-11 18:30:23.115988] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:36.861 [2024-07-11 18:30:23.115997] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:36.861 [2024-07-11 18:30:23.116006] ftl_layout.c: 
118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:36.861 [2024-07-11 18:30:23.116015] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:36.861 [2024-07-11 18:30:23.116023] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:36.861 [2024-07-11 18:30:23.116033] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:36.861 [2024-07-11 18:30:23.116041] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:36.861 [2024-07-11 18:30:23.116050] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:36.861 [2024-07-11 18:30:23.116058] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:36.861 [2024-07-11 18:30:23.116073] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:36.861 [2024-07-11 18:30:23.116082] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:36.861 [2024-07-11 18:30:23.116091] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:36.861 [2024-07-11 18:30:23.116100] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:36.861 [2024-07-11 18:30:23.116124] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:36.861 [2024-07-11 18:30:23.116133] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:36.861 [2024-07-11 18:30:23.116142] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:36.861 [2024-07-11 18:30:23.116150] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:36.861 [2024-07-11 18:30:23.116159] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:36.861 [2024-07-11 18:30:23.116168] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:36.861 [2024-07-11 18:30:23.116176] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:36.861 [2024-07-11 18:30:23.116456] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:36.861 [2024-07-11 18:30:23.116500] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:36.861 [2024-07-11 18:30:23.116550] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:36.861 [2024-07-11 18:30:23.116584] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:36.861 [2024-07-11 18:30:23.116708] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:36.861 [2024-07-11 18:30:23.116764] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:36.861 [2024-07-11 18:30:23.116804] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:36.861 [2024-07-11 18:30:23.116907] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:36.861 [2024-07-11 18:30:23.116951] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:36.861 [2024-07-11 18:30:23.116986] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:36.861 [2024-07-11 18:30:23.117141] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:36.861 [2024-07-11 18:30:23.117190] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:36.861 [2024-07-11 18:30:23.117228] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:36.861 [2024-07-11 18:30:23.117327] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:36.861 [2024-07-11 
18:30:23.117386] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:36.861 [2024-07-11 18:30:23.117422] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:36.861 [2024-07-11 18:30:23.117456] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:36.861 [2024-07-11 18:30:23.117584] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:36.861 [2024-07-11 18:30:23.117619] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:36.861 [2024-07-11 18:30:23.117711] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:36.861 [2024-07-11 18:30:23.117730] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:36.861 [2024-07-11 18:30:23.117745] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:36.861 [2024-07-11 18:30:23.117757] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:36.861 [2024-07-11 18:30:23.117767] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:36.861 [2024-07-11 18:30:23.117786] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:36.861 [2024-07-11 18:30:23.117796] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:36.861 [2024-07-11 18:30:23.117805] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:36.861 [2024-07-11 18:30:23.117814] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:36.861 [2024-07-11 18:30:23.117823] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:36.861 [2024-07-11 18:30:23.117832] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:36.861 [2024-07-11 18:30:23.117841] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:36.861 [2024-07-11 18:30:23.117852] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:36.861 [2024-07-11 18:30:23.117864] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:36.861 [2024-07-11 18:30:23.117880] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:36.861 [2024-07-11 18:30:23.117890] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:36.861 [2024-07-11 18:30:23.117900] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:36.861 [2024-07-11 18:30:23.117909] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:36.861 [2024-07-11 18:30:23.117921] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:36.861 [2024-07-11 18:30:23.117932] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:36.861 [2024-07-11 18:30:23.117942] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:36.861 [2024-07-11 18:30:23.117952] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:36.861 [2024-07-11 18:30:23.117961] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:36.861 [2024-07-11 18:30:23.117971] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:36.861 [2024-07-11 18:30:23.117981] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:36.861 [2024-07-11 18:30:23.117991] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:36.861 [2024-07-11 18:30:23.118010] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:36.861 [2024-07-11 18:30:23.118020] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:36.861 [2024-07-11 18:30:23.118030] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:36.861 [2024-07-11 18:30:23.118041] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:36.861 [2024-07-11 18:30:23.118053] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:36.861 [2024-07-11 18:30:23.118063] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:36.861 [2024-07-11 18:30:23.118073] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:36.861 [2024-07-11 18:30:23.118097] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:36.861 [2024-07-11 18:30:23.118114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.861 [2024-07-11 18:30:23.118126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:36.861 [2024-07-11 18:30:23.118145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.201 ms 00:23:36.861 [2024-07-11 18:30:23.118156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.861 [2024-07-11 18:30:23.135695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.861 [2024-07-11 18:30:23.135909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:36.861 [2024-07-11 18:30:23.136026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.454 ms 00:23:36.861 [2024-07-11 18:30:23.136073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.861 [2024-07-11 18:30:23.136323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.861 [2024-07-11 18:30:23.136393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:36.861 [2024-07-11 18:30:23.136436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:23:36.861 [2024-07-11 18:30:23.136534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.861 [2024-07-11 18:30:23.143448] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.861 [2024-07-11 18:30:23.143666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:36.861 [2024-07-11 18:30:23.143797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.806 ms 00:23:36.861 [2024-07-11 18:30:23.143849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.861 [2024-07-11 18:30:23.143990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.861 [2024-07-11 18:30:23.144046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:36.861 [2024-07-11 18:30:23.144133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:36.861 [2024-07-11 18:30:23.144240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.861 [2024-07-11 18:30:23.144610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.861 [2024-07-11 18:30:23.144759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:36.861 [2024-07-11 18:30:23.144855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.288 ms 00:23:36.861 [2024-07-11 18:30:23.144976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.861 [2024-07-11 18:30:23.145202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.861 [2024-07-11 18:30:23.145266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:36.861 [2024-07-11 18:30:23.145374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.151 ms 00:23:36.861 [2024-07-11 18:30:23.145422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.861 [2024-07-11 18:30:23.150000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.861 [2024-07-11 18:30:23.150220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:36.861 [2024-07-11 18:30:23.150349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.508 ms 00:23:36.861 [2024-07-11 18:30:23.150452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.861 [2024-07-11 18:30:23.153051] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:23:36.861 [2024-07-11 18:30:23.153319] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:36.861 [2024-07-11 18:30:23.153464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.861 [2024-07-11 18:30:23.153585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:36.861 [2024-07-11 18:30:23.153632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.791 ms 00:23:36.861 [2024-07-11 18:30:23.153720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.861 [2024-07-11 18:30:23.169560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.861 [2024-07-11 18:30:23.169760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:36.861 [2024-07-11 18:30:23.169877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.674 ms 00:23:36.861 [2024-07-11 18:30:23.169925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.861 [2024-07-11 18:30:23.171893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.862 
[2024-07-11 18:30:23.172072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:36.862 [2024-07-11 18:30:23.172238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.893 ms 00:23:36.862 [2024-07-11 18:30:23.172287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.862 [2024-07-11 18:30:23.174102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.862 [2024-07-11 18:30:23.174293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:36.862 [2024-07-11 18:30:23.174409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.674 ms 00:23:36.862 [2024-07-11 18:30:23.174550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.862 [2024-07-11 18:30:23.175082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.862 [2024-07-11 18:30:23.175262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:36.862 [2024-07-11 18:30:23.175300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.325 ms 00:23:36.862 [2024-07-11 18:30:23.175313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.862 [2024-07-11 18:30:23.202881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.862 [2024-07-11 18:30:23.203230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:36.862 [2024-07-11 18:30:23.203354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.539 ms 00:23:36.862 [2024-07-11 18:30:23.203459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.862 [2024-07-11 18:30:23.211963] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:36.862 [2024-07-11 18:30:23.214905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.862 [2024-07-11 18:30:23.215147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:36.862 [2024-07-11 18:30:23.215283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.247 ms 00:23:36.862 [2024-07-11 18:30:23.215306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.862 [2024-07-11 18:30:23.215421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.862 [2024-07-11 18:30:23.215446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:36.862 [2024-07-11 18:30:23.215460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:36.862 [2024-07-11 18:30:23.215481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.862 [2024-07-11 18:30:23.215574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.862 [2024-07-11 18:30:23.215603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:36.862 [2024-07-11 18:30:23.215616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:23:36.862 [2024-07-11 18:30:23.215627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.862 [2024-07-11 18:30:23.215659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.862 [2024-07-11 18:30:23.215685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:36.862 [2024-07-11 18:30:23.215702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:36.862 
[2024-07-11 18:30:23.215720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.862 [2024-07-11 18:30:23.215786] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:36.862 [2024-07-11 18:30:23.215804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.862 [2024-07-11 18:30:23.215816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:36.862 [2024-07-11 18:30:23.215827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:23:36.862 [2024-07-11 18:30:23.215837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.862 [2024-07-11 18:30:23.219196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.862 [2024-07-11 18:30:23.219264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:36.862 [2024-07-11 18:30:23.219281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.309 ms 00:23:36.862 [2024-07-11 18:30:23.219310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.862 [2024-07-11 18:30:23.219390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.862 [2024-07-11 18:30:23.219409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:36.862 [2024-07-11 18:30:23.219422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:23:36.862 [2024-07-11 18:30:23.219433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.862 [2024-07-11 18:30:23.220871] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 114.871 ms, result 0 00:24:24.061  Copying: 23/1024 [MB] (23 MBps) Copying: 45/1024 [MB] (22 MBps) Copying: 67/1024 [MB] (22 MBps) Copying: 89/1024 [MB] (21 MBps) Copying: 110/1024 [MB] (21 MBps) Copying: 132/1024 [MB] (21 MBps) Copying: 153/1024 [MB] (21 MBps) Copying: 175/1024 [MB] (21 MBps) Copying: 197/1024 [MB] (22 MBps) Copying: 219/1024 [MB] (21 MBps) Copying: 241/1024 [MB] (21 MBps) Copying: 263/1024 [MB] (22 MBps) Copying: 285/1024 [MB] (21 MBps) Copying: 306/1024 [MB] (21 MBps) Copying: 328/1024 [MB] (21 MBps) Copying: 350/1024 [MB] (22 MBps) Copying: 372/1024 [MB] (21 MBps) Copying: 394/1024 [MB] (22 MBps) Copying: 417/1024 [MB] (22 MBps) Copying: 439/1024 [MB] (22 MBps) Copying: 461/1024 [MB] (21 MBps) Copying: 483/1024 [MB] (22 MBps) Copying: 506/1024 [MB] (22 MBps) Copying: 527/1024 [MB] (21 MBps) Copying: 550/1024 [MB] (22 MBps) Copying: 572/1024 [MB] (22 MBps) Copying: 595/1024 [MB] (22 MBps) Copying: 617/1024 [MB] (22 MBps) Copying: 640/1024 [MB] (22 MBps) Copying: 662/1024 [MB] (22 MBps) Copying: 684/1024 [MB] (22 MBps) Copying: 707/1024 [MB] (22 MBps) Copying: 730/1024 [MB] (22 MBps) Copying: 752/1024 [MB] (22 MBps) Copying: 775/1024 [MB] (23 MBps) Copying: 797/1024 [MB] (22 MBps) Copying: 820/1024 [MB] (22 MBps) Copying: 842/1024 [MB] (22 MBps) Copying: 865/1024 [MB] (22 MBps) Copying: 888/1024 [MB] (22 MBps) Copying: 910/1024 [MB] (22 MBps) Copying: 932/1024 [MB] (21 MBps) Copying: 954/1024 [MB] (21 MBps) Copying: 975/1024 [MB] (21 MBps) Copying: 998/1024 [MB] (22 MBps) Copying: 1021/1024 [MB] (22 MBps) Copying: 1024/1024 [MB] (average 21 MBps)[2024-07-11 18:31:10.210939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.061 [2024-07-11 18:31:10.211008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 
00:24:24.061 [2024-07-11 18:31:10.211030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:24.061 [2024-07-11 18:31:10.211053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.061 [2024-07-11 18:31:10.214294] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:24.061 [2024-07-11 18:31:10.216241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.061 [2024-07-11 18:31:10.216332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:24.061 [2024-07-11 18:31:10.216350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.896 ms 00:24:24.061 [2024-07-11 18:31:10.216362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.061 [2024-07-11 18:31:10.233237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.061 [2024-07-11 18:31:10.233295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:24.061 [2024-07-11 18:31:10.233330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.332 ms 00:24:24.061 [2024-07-11 18:31:10.233350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.061 [2024-07-11 18:31:10.254519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.061 [2024-07-11 18:31:10.254624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:24.061 [2024-07-11 18:31:10.254670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.147 ms 00:24:24.061 [2024-07-11 18:31:10.254681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.061 [2024-07-11 18:31:10.260845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.061 [2024-07-11 18:31:10.260877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:24.061 [2024-07-11 18:31:10.260906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.127 ms 00:24:24.061 [2024-07-11 18:31:10.260915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.061 [2024-07-11 18:31:10.262369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.061 [2024-07-11 18:31:10.262404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:24.061 [2024-07-11 18:31:10.262447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.412 ms 00:24:24.062 [2024-07-11 18:31:10.262457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.062 [2024-07-11 18:31:10.265644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.062 [2024-07-11 18:31:10.265682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:24.062 [2024-07-11 18:31:10.265712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.152 ms 00:24:24.062 [2024-07-11 18:31:10.265722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.062 [2024-07-11 18:31:10.349311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.062 [2024-07-11 18:31:10.349361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:24.062 [2024-07-11 18:31:10.349409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 83.549 ms 00:24:24.062 [2024-07-11 18:31:10.349419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.062 [2024-07-11 
18:31:10.351418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.062 [2024-07-11 18:31:10.351470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:24:24.062 [2024-07-11 18:31:10.351484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.970 ms 00:24:24.062 [2024-07-11 18:31:10.351494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.062 [2024-07-11 18:31:10.352985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.062 [2024-07-11 18:31:10.353035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:24:24.062 [2024-07-11 18:31:10.353063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.454 ms 00:24:24.062 [2024-07-11 18:31:10.353072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.062 [2024-07-11 18:31:10.354307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.062 [2024-07-11 18:31:10.354340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:24.062 [2024-07-11 18:31:10.354368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.176 ms 00:24:24.062 [2024-07-11 18:31:10.354377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.062 [2024-07-11 18:31:10.355593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.062 [2024-07-11 18:31:10.355644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:24.062 [2024-07-11 18:31:10.355673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.157 ms 00:24:24.062 [2024-07-11 18:31:10.355696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.062 [2024-07-11 18:31:10.355730] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:24.062 [2024-07-11 18:31:10.355750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 89344 / 261120 wr_cnt: 1 state: open 00:24:24.062 [2024-07-11 18:31:10.355763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:24.062 [2024-07-11 18:31:10.355774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:24.062 [2024-07-11 18:31:10.355784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:24.062 [2024-07-11 18:31:10.355794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:24.062 [2024-07-11 18:31:10.355804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:24.062 [2024-07-11 18:31:10.355814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:24.062 [2024-07-11 18:31:10.355824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:24.062 [2024-07-11 18:31:10.355834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:24.062 [2024-07-11 18:31:10.355844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:24.062 [2024-07-11 18:31:10.355853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:24.062 [2024-07-11 18:31:10.355863] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:24.062 [2024-07-11 18:31:10.355873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:24.062 [2024-07-11 18:31:10.355883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:24.062 [2024-07-11 18:31:10.355893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:24.062 [2024-07-11 18:31:10.355903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:24.062 [2024-07-11 18:31:10.355913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:24.062 [2024-07-11 18:31:10.355923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:24.062 [2024-07-11 18:31:10.355933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:24.062 [2024-07-11 18:31:10.355943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:24.062 [2024-07-11 18:31:10.355953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:24.062 [2024-07-11 18:31:10.355963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:24.062 [2024-07-11 18:31:10.355973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:24.062 [2024-07-11 18:31:10.355982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:24.062 [2024-07-11 18:31:10.355993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:24.062 [2024-07-11 18:31:10.356003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:24.062 [2024-07-11 18:31:10.356014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:24.062 [2024-07-11 18:31:10.356024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:24.062 [2024-07-11 18:31:10.356034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:24.062 [2024-07-11 18:31:10.356044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:24.062 [2024-07-11 18:31:10.356054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:24.062 [2024-07-11 18:31:10.356065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:24.062 [2024-07-11 18:31:10.356075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:24.062 [2024-07-11 18:31:10.356120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:24.062 [2024-07-11 18:31:10.356132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:24.062 [2024-07-11 18:31:10.356142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:24.062 [2024-07-11 18:31:10.356152] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:24.062 [2024-07-11 18:31:10.356162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:24.062 [2024-07-11 18:31:10.356173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:24.062 [2024-07-11 18:31:10.356183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:24.062 [2024-07-11 18:31:10.356193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:24.062 [2024-07-11 18:31:10.356203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:24.062 [2024-07-11 18:31:10.356213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:24.062 [2024-07-11 18:31:10.356223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:24.062 [2024-07-11 18:31:10.356233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:24.062 [2024-07-11 18:31:10.356243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:24.062 [2024-07-11 18:31:10.356253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:24.062 [2024-07-11 18:31:10.356264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:24.062 [2024-07-11 18:31:10.356273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:24.062 [2024-07-11 18:31:10.356299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:24.062 [2024-07-11 18:31:10.356310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:24.062 [2024-07-11 18:31:10.356320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:24.062 [2024-07-11 18:31:10.356330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:24.062 [2024-07-11 18:31:10.356340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:24.062 [2024-07-11 18:31:10.356350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:24.062 [2024-07-11 18:31:10.356361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:24.062 [2024-07-11 18:31:10.356372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:24.062 [2024-07-11 18:31:10.356382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:24.062 [2024-07-11 18:31:10.356391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:24.062 [2024-07-11 18:31:10.356402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:24.062 [2024-07-11 18:31:10.356412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:24.062 [2024-07-11 
18:31:10.356422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:24.062 [2024-07-11 18:31:10.356432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:24.062 [2024-07-11 18:31:10.356458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:24.062 [2024-07-11 18:31:10.356468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:24.062 [2024-07-11 18:31:10.356478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:24.062 [2024-07-11 18:31:10.356488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:24.062 [2024-07-11 18:31:10.356498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:24.062 [2024-07-11 18:31:10.356508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:24.062 [2024-07-11 18:31:10.356518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:24.062 [2024-07-11 18:31:10.356528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:24.062 [2024-07-11 18:31:10.356538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:24.062 [2024-07-11 18:31:10.356548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:24.062 [2024-07-11 18:31:10.356558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:24.062 [2024-07-11 18:31:10.356567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:24.063 [2024-07-11 18:31:10.356578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:24.063 [2024-07-11 18:31:10.356587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:24.063 [2024-07-11 18:31:10.356597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:24.063 [2024-07-11 18:31:10.356607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:24.063 [2024-07-11 18:31:10.356616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:24.063 [2024-07-11 18:31:10.356626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:24.063 [2024-07-11 18:31:10.356636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:24.063 [2024-07-11 18:31:10.356646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:24.063 [2024-07-11 18:31:10.356656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:24.063 [2024-07-11 18:31:10.356666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:24.063 [2024-07-11 18:31:10.356676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 
00:24:24.063 [2024-07-11 18:31:10.356686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:24.063 [2024-07-11 18:31:10.356696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:24.063 [2024-07-11 18:31:10.356705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:24.063 [2024-07-11 18:31:10.356715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:24.063 [2024-07-11 18:31:10.356725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:24.063 [2024-07-11 18:31:10.356735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:24.063 [2024-07-11 18:31:10.356745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:24.063 [2024-07-11 18:31:10.356755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:24.063 [2024-07-11 18:31:10.356765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:24.063 [2024-07-11 18:31:10.356775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:24.063 [2024-07-11 18:31:10.356785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:24.063 [2024-07-11 18:31:10.356804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:24.063 [2024-07-11 18:31:10.356814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:24.063 [2024-07-11 18:31:10.356824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:24.063 [2024-07-11 18:31:10.356842] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:24.063 [2024-07-11 18:31:10.356868] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 85c256d6-5302-4f92-a9ec-36f2d09512df 00:24:24.063 [2024-07-11 18:31:10.356879] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 89344 00:24:24.063 [2024-07-11 18:31:10.356888] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 90304 00:24:24.063 [2024-07-11 18:31:10.356897] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 89344 00:24:24.063 [2024-07-11 18:31:10.356907] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0107 00:24:24.063 [2024-07-11 18:31:10.356917] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:24.063 [2024-07-11 18:31:10.356926] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:24.063 [2024-07-11 18:31:10.356936] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:24.063 [2024-07-11 18:31:10.356944] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:24.063 [2024-07-11 18:31:10.356953] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:24.063 [2024-07-11 18:31:10.356964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.063 [2024-07-11 18:31:10.356979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:24.063 [2024-07-11 18:31:10.356989] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.236 ms 00:24:24.063 [2024-07-11 18:31:10.356998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.063 [2024-07-11 18:31:10.358332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.063 [2024-07-11 18:31:10.358359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:24.063 [2024-07-11 18:31:10.358371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.313 ms 00:24:24.063 [2024-07-11 18:31:10.358381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.063 [2024-07-11 18:31:10.358471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:24.063 [2024-07-11 18:31:10.358485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:24.063 [2024-07-11 18:31:10.358495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:24:24.063 [2024-07-11 18:31:10.358524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.063 [2024-07-11 18:31:10.362804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:24.063 [2024-07-11 18:31:10.362834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:24.063 [2024-07-11 18:31:10.362846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:24.063 [2024-07-11 18:31:10.362860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.063 [2024-07-11 18:31:10.362920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:24.063 [2024-07-11 18:31:10.362934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:24.063 [2024-07-11 18:31:10.362944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:24.063 [2024-07-11 18:31:10.362953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.063 [2024-07-11 18:31:10.363016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:24.063 [2024-07-11 18:31:10.363032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:24.063 [2024-07-11 18:31:10.363042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:24.063 [2024-07-11 18:31:10.363051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.063 [2024-07-11 18:31:10.363077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:24.063 [2024-07-11 18:31:10.363089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:24.063 [2024-07-11 18:31:10.363137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:24.063 [2024-07-11 18:31:10.363147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.063 [2024-07-11 18:31:10.370945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:24.063 [2024-07-11 18:31:10.371000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:24.063 [2024-07-11 18:31:10.371031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:24.063 [2024-07-11 18:31:10.371040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.063 [2024-07-11 18:31:10.377757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:24.063 [2024-07-11 18:31:10.377805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize metadata 00:24:24.063 [2024-07-11 18:31:10.377821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:24.063 [2024-07-11 18:31:10.377830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.063 [2024-07-11 18:31:10.377891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:24.063 [2024-07-11 18:31:10.377905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:24.063 [2024-07-11 18:31:10.377915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:24.063 [2024-07-11 18:31:10.377925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.063 [2024-07-11 18:31:10.377950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:24.063 [2024-07-11 18:31:10.377970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:24.063 [2024-07-11 18:31:10.377980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:24.063 [2024-07-11 18:31:10.377989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.063 [2024-07-11 18:31:10.378065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:24.063 [2024-07-11 18:31:10.378177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:24.063 [2024-07-11 18:31:10.378193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:24.063 [2024-07-11 18:31:10.378214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.063 [2024-07-11 18:31:10.378270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:24.063 [2024-07-11 18:31:10.378286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:24.063 [2024-07-11 18:31:10.378302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:24.063 [2024-07-11 18:31:10.378311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.063 [2024-07-11 18:31:10.378352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:24.063 [2024-07-11 18:31:10.378365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:24.063 [2024-07-11 18:31:10.378375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:24.063 [2024-07-11 18:31:10.378384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.063 [2024-07-11 18:31:10.378429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:24.063 [2024-07-11 18:31:10.378464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:24.063 [2024-07-11 18:31:10.378490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:24.063 [2024-07-11 18:31:10.378499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:24.063 [2024-07-11 18:31:10.378631] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 168.806 ms, result 0 00:24:24.631 00:24:24.631 00:24:24.631 18:31:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:24:26.532 18:31:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:26.820 [2024-07-11 18:31:13.042426] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:24:26.820 [2024-07-11 18:31:13.042641] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94754 ] 00:24:26.820 [2024-07-11 18:31:13.197809] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:27.128 [2024-07-11 18:31:13.245751] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:27.128 [2024-07-11 18:31:13.334428] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:27.128 [2024-07-11 18:31:13.334508] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:27.128 [2024-07-11 18:31:13.492518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.128 [2024-07-11 18:31:13.492601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:27.128 [2024-07-11 18:31:13.492643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:27.128 [2024-07-11 18:31:13.492655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.128 [2024-07-11 18:31:13.492751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.128 [2024-07-11 18:31:13.492772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:27.128 [2024-07-11 18:31:13.492790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:24:27.128 [2024-07-11 18:31:13.492801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.128 [2024-07-11 18:31:13.492833] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:27.128 [2024-07-11 18:31:13.493190] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:27.128 [2024-07-11 18:31:13.493219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.128 [2024-07-11 18:31:13.493233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:27.128 [2024-07-11 18:31:13.493253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.393 ms 00:24:27.128 [2024-07-11 18:31:13.493264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.128 [2024-07-11 18:31:13.494472] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:27.128 [2024-07-11 18:31:13.496914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.128 [2024-07-11 18:31:13.496995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:27.128 [2024-07-11 18:31:13.497023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.444 ms 00:24:27.128 [2024-07-11 18:31:13.497041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.128 [2024-07-11 18:31:13.497121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.128 [2024-07-11 18:31:13.497151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:27.128 [2024-07-11 18:31:13.497166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:24:27.128 [2024-07-11 
18:31:13.497177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.128 [2024-07-11 18:31:13.501933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.128 [2024-07-11 18:31:13.502001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:27.128 [2024-07-11 18:31:13.502038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.672 ms 00:24:27.128 [2024-07-11 18:31:13.502055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.128 [2024-07-11 18:31:13.502169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.128 [2024-07-11 18:31:13.502188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:27.128 [2024-07-11 18:31:13.502200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:24:27.128 [2024-07-11 18:31:13.502209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.128 [2024-07-11 18:31:13.502301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.128 [2024-07-11 18:31:13.502335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:27.128 [2024-07-11 18:31:13.502350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:24:27.128 [2024-07-11 18:31:13.502369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.128 [2024-07-11 18:31:13.502410] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:27.128 [2024-07-11 18:31:13.503866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.128 [2024-07-11 18:31:13.503966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:27.128 [2024-07-11 18:31:13.503978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.464 ms 00:24:27.128 [2024-07-11 18:31:13.503988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.128 [2024-07-11 18:31:13.504031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.128 [2024-07-11 18:31:13.504049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:27.128 [2024-07-11 18:31:13.504069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:24:27.128 [2024-07-11 18:31:13.504078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.128 [2024-07-11 18:31:13.504148] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:27.128 [2024-07-11 18:31:13.504183] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:27.128 [2024-07-11 18:31:13.504227] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:27.128 [2024-07-11 18:31:13.504249] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:24:27.128 [2024-07-11 18:31:13.504343] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:27.128 [2024-07-11 18:31:13.504357] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:27.128 [2024-07-11 18:31:13.504380] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:24:27.128 
[2024-07-11 18:31:13.504401] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:27.128 [2024-07-11 18:31:13.504420] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:27.128 [2024-07-11 18:31:13.504431] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:27.128 [2024-07-11 18:31:13.504440] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:27.128 [2024-07-11 18:31:13.504449] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:27.128 [2024-07-11 18:31:13.504473] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:27.128 [2024-07-11 18:31:13.504483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.128 [2024-07-11 18:31:13.504516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:27.128 [2024-07-11 18:31:13.504529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.339 ms 00:24:27.128 [2024-07-11 18:31:13.504538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.128 [2024-07-11 18:31:13.504665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.128 [2024-07-11 18:31:13.504683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:27.128 [2024-07-11 18:31:13.504694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:24:27.128 [2024-07-11 18:31:13.504704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.128 [2024-07-11 18:31:13.504848] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:27.128 [2024-07-11 18:31:13.504869] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:27.128 [2024-07-11 18:31:13.504881] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:27.128 [2024-07-11 18:31:13.504905] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:27.128 [2024-07-11 18:31:13.504931] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:27.128 [2024-07-11 18:31:13.504956] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:27.128 [2024-07-11 18:31:13.504966] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:27.128 [2024-07-11 18:31:13.504977] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:27.128 [2024-07-11 18:31:13.504987] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:27.128 [2024-07-11 18:31:13.504996] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:27.128 [2024-07-11 18:31:13.505005] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:27.128 [2024-07-11 18:31:13.505015] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:27.128 [2024-07-11 18:31:13.505027] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:27.128 [2024-07-11 18:31:13.505037] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:27.128 [2024-07-11 18:31:13.505047] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:27.128 [2024-07-11 18:31:13.505056] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:27.128 [2024-07-11 18:31:13.505066] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 
00:24:27.128 [2024-07-11 18:31:13.505075] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:27.128 [2024-07-11 18:31:13.505085] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:27.128 [2024-07-11 18:31:13.505095] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:27.128 [2024-07-11 18:31:13.505104] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:27.128 [2024-07-11 18:31:13.505113] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:27.128 [2024-07-11 18:31:13.505123] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:27.128 [2024-07-11 18:31:13.505132] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:27.128 [2024-07-11 18:31:13.505141] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:27.128 [2024-07-11 18:31:13.505151] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:27.128 [2024-07-11 18:31:13.505160] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:27.128 [2024-07-11 18:31:13.505169] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:27.128 [2024-07-11 18:31:13.505182] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:27.128 [2024-07-11 18:31:13.505208] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:27.128 [2024-07-11 18:31:13.505263] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:27.128 [2024-07-11 18:31:13.505289] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:27.128 [2024-07-11 18:31:13.505312] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:27.128 [2024-07-11 18:31:13.505321] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:27.128 [2024-07-11 18:31:13.505330] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:27.128 [2024-07-11 18:31:13.505339] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:27.128 [2024-07-11 18:31:13.505348] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:27.128 [2024-07-11 18:31:13.505356] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:27.128 [2024-07-11 18:31:13.505365] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:27.128 [2024-07-11 18:31:13.505374] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:27.128 [2024-07-11 18:31:13.505382] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:27.128 [2024-07-11 18:31:13.505391] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:27.128 [2024-07-11 18:31:13.505401] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:27.128 [2024-07-11 18:31:13.505410] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:27.128 [2024-07-11 18:31:13.505423] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:27.128 [2024-07-11 18:31:13.505433] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:27.128 [2024-07-11 18:31:13.505442] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:27.128 [2024-07-11 18:31:13.505452] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:27.128 [2024-07-11 18:31:13.505461] ftl_layout.c: 119:dump_region: 
*NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:27.128 [2024-07-11 18:31:13.505470] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:27.128 [2024-07-11 18:31:13.505480] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:27.128 [2024-07-11 18:31:13.505489] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:27.128 [2024-07-11 18:31:13.505498] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:27.128 [2024-07-11 18:31:13.505525] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:27.128 [2024-07-11 18:31:13.505537] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:27.128 [2024-07-11 18:31:13.505549] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:27.128 [2024-07-11 18:31:13.505559] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:27.128 [2024-07-11 18:31:13.505570] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:27.128 [2024-07-11 18:31:13.505620] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:27.128 [2024-07-11 18:31:13.505654] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:27.128 [2024-07-11 18:31:13.505669] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:27.128 [2024-07-11 18:31:13.505681] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:27.128 [2024-07-11 18:31:13.505692] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:27.128 [2024-07-11 18:31:13.505719] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:27.128 [2024-07-11 18:31:13.505731] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:27.128 [2024-07-11 18:31:13.505742] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:27.128 [2024-07-11 18:31:13.505753] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:27.128 [2024-07-11 18:31:13.505764] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:27.128 [2024-07-11 18:31:13.505776] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:27.128 [2024-07-11 18:31:13.505788] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:27.128 [2024-07-11 18:31:13.505801] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 
blk_offs:0x0 blk_sz:0x20 00:24:27.128 [2024-07-11 18:31:13.505826] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:27.128 [2024-07-11 18:31:13.505838] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:27.128 [2024-07-11 18:31:13.505849] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:27.128 [2024-07-11 18:31:13.505861] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:27.128 [2024-07-11 18:31:13.505874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.128 [2024-07-11 18:31:13.505903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:27.128 [2024-07-11 18:31:13.505933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.105 ms 00:24:27.128 [2024-07-11 18:31:13.505944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.128 [2024-07-11 18:31:13.522998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.128 [2024-07-11 18:31:13.523076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:27.128 [2024-07-11 18:31:13.523106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.946 ms 00:24:27.128 [2024-07-11 18:31:13.523125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.128 [2024-07-11 18:31:13.523236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.128 [2024-07-11 18:31:13.523279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:27.128 [2024-07-11 18:31:13.523296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:24:27.128 [2024-07-11 18:31:13.523314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.400 [2024-07-11 18:31:13.531926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.400 [2024-07-11 18:31:13.532010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:27.400 [2024-07-11 18:31:13.532027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.516 ms 00:24:27.400 [2024-07-11 18:31:13.532040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.400 [2024-07-11 18:31:13.532116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.400 [2024-07-11 18:31:13.532134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:27.400 [2024-07-11 18:31:13.532155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:27.400 [2024-07-11 18:31:13.532167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.400 [2024-07-11 18:31:13.532545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.400 [2024-07-11 18:31:13.532579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:27.400 [2024-07-11 18:31:13.532618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.314 ms 00:24:27.400 [2024-07-11 18:31:13.532632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.400 [2024-07-11 18:31:13.532790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:24:27.400 [2024-07-11 18:31:13.532815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:27.400 [2024-07-11 18:31:13.532828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.124 ms 00:24:27.400 [2024-07-11 18:31:13.532851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.400 [2024-07-11 18:31:13.538498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.400 [2024-07-11 18:31:13.538567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:27.400 [2024-07-11 18:31:13.538608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.618 ms 00:24:27.400 [2024-07-11 18:31:13.538625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.400 [2024-07-11 18:31:13.541425] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:24:27.400 [2024-07-11 18:31:13.541528] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:27.400 [2024-07-11 18:31:13.541556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.400 [2024-07-11 18:31:13.541598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:27.400 [2024-07-11 18:31:13.541639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.764 ms 00:24:27.400 [2024-07-11 18:31:13.541659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.400 [2024-07-11 18:31:13.558041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.400 [2024-07-11 18:31:13.558090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:27.400 [2024-07-11 18:31:13.558118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.314 ms 00:24:27.400 [2024-07-11 18:31:13.558138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.400 [2024-07-11 18:31:13.560220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.400 [2024-07-11 18:31:13.560251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:27.400 [2024-07-11 18:31:13.560263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.052 ms 00:24:27.400 [2024-07-11 18:31:13.560272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.400 [2024-07-11 18:31:13.561950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.400 [2024-07-11 18:31:13.562008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:27.400 [2024-07-11 18:31:13.562042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.636 ms 00:24:27.400 [2024-07-11 18:31:13.562058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.400 [2024-07-11 18:31:13.562503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.400 [2024-07-11 18:31:13.562537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:27.400 [2024-07-11 18:31:13.562551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.321 ms 00:24:27.400 [2024-07-11 18:31:13.562564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.400 [2024-07-11 18:31:13.578923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.400 [2024-07-11 18:31:13.579003] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:27.401 [2024-07-11 18:31:13.579022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.284 ms 00:24:27.401 [2024-07-11 18:31:13.579046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.401 [2024-07-11 18:31:13.586594] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:27.401 [2024-07-11 18:31:13.588975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.401 [2024-07-11 18:31:13.589003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:27.401 [2024-07-11 18:31:13.589025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.842 ms 00:24:27.401 [2024-07-11 18:31:13.589035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.401 [2024-07-11 18:31:13.589139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.401 [2024-07-11 18:31:13.589158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:27.401 [2024-07-11 18:31:13.589173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:24:27.401 [2024-07-11 18:31:13.589190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.401 [2024-07-11 18:31:13.590728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.401 [2024-07-11 18:31:13.590774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:27.401 [2024-07-11 18:31:13.590790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.475 ms 00:24:27.401 [2024-07-11 18:31:13.590809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.401 [2024-07-11 18:31:13.590845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.401 [2024-07-11 18:31:13.590869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:27.401 [2024-07-11 18:31:13.590880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:24:27.401 [2024-07-11 18:31:13.590889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.401 [2024-07-11 18:31:13.590927] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:27.401 [2024-07-11 18:31:13.590957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.401 [2024-07-11 18:31:13.590998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:27.401 [2024-07-11 18:31:13.591023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:24:27.401 [2024-07-11 18:31:13.591033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.401 [2024-07-11 18:31:13.594828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.401 [2024-07-11 18:31:13.594886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:27.401 [2024-07-11 18:31:13.594902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.774 ms 00:24:27.401 [2024-07-11 18:31:13.594922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.401 [2024-07-11 18:31:13.594996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.401 [2024-07-11 18:31:13.595013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:27.401 [2024-07-11 18:31:13.595031] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:24:27.401 [2024-07-11 18:31:13.595057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.401 [2024-07-11 18:31:13.598945] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 105.345 ms, result 0 00:25:07.065  Copying: 1096/1048576 [kB] (1096 kBps) Copying: 3232/1048576 [kB] (2136 kBps) Copying: 16/1024 [MB] (13 MBps) Copying: 43/1024 [MB] (26 MBps) Copying: 71/1024 [MB] (27 MBps) Copying: 99/1024 [MB] (28 MBps) Copying: 127/1024 [MB] (28 MBps) Copying: 154/1024 [MB] (27 MBps) Copying: 181/1024 [MB] (26 MBps) Copying: 209/1024 [MB] (27 MBps) Copying: 236/1024 [MB] (27 MBps) Copying: 262/1024 [MB] (26 MBps) Copying: 289/1024 [MB] (26 MBps) Copying: 316/1024 [MB] (27 MBps) Copying: 343/1024 [MB] (27 MBps) Copying: 370/1024 [MB] (26 MBps) Copying: 398/1024 [MB] (27 MBps) Copying: 425/1024 [MB] (27 MBps) Copying: 453/1024 [MB] (27 MBps) Copying: 481/1024 [MB] (27 MBps) Copying: 510/1024 [MB] (28 MBps) Copying: 538/1024 [MB] (28 MBps) Copying: 567/1024 [MB] (28 MBps) Copying: 595/1024 [MB] (28 MBps) Copying: 623/1024 [MB] (28 MBps) Copying: 652/1024 [MB] (28 MBps) Copying: 680/1024 [MB] (28 MBps) Copying: 708/1024 [MB] (28 MBps) Copying: 736/1024 [MB] (28 MBps) Copying: 764/1024 [MB] (28 MBps) Copying: 793/1024 [MB] (29 MBps) Copying: 823/1024 [MB] (29 MBps) Copying: 851/1024 [MB] (28 MBps) Copying: 879/1024 [MB] (27 MBps) Copying: 907/1024 [MB] (28 MBps) Copying: 935/1024 [MB] (27 MBps) Copying: 963/1024 [MB] (28 MBps) Copying: 992/1024 [MB] (28 MBps) Copying: 1020/1024 [MB] (27 MBps) Copying: 1024/1024 [MB] (average 26 MBps)[2024-07-11 18:31:53.249313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:07.065 [2024-07-11 18:31:53.250284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:07.065 [2024-07-11 18:31:53.250343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:07.065 [2024-07-11 18:31:53.250388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.065 [2024-07-11 18:31:53.250451] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:07.065 [2024-07-11 18:31:53.251105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:07.066 [2024-07-11 18:31:53.251147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:07.066 [2024-07-11 18:31:53.251170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.601 ms 00:25:07.066 [2024-07-11 18:31:53.251188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.066 [2024-07-11 18:31:53.251586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:07.066 [2024-07-11 18:31:53.251606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:07.066 [2024-07-11 18:31:53.251618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.356 ms 00:25:07.066 [2024-07-11 18:31:53.251643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.066 [2024-07-11 18:31:53.264412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:07.066 [2024-07-11 18:31:53.264689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:07.066 [2024-07-11 18:31:53.264919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.745 ms 00:25:07.066 [2024-07-11 
18:31:53.265155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.066 [2024-07-11 18:31:53.274072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:07.066 [2024-07-11 18:31:53.274317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:07.066 [2024-07-11 18:31:53.274548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.663 ms 00:25:07.066 [2024-07-11 18:31:53.274758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.066 [2024-07-11 18:31:53.276364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:07.066 [2024-07-11 18:31:53.276611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:07.066 [2024-07-11 18:31:53.276710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.338 ms 00:25:07.066 [2024-07-11 18:31:53.276806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.066 [2024-07-11 18:31:53.279877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:07.066 [2024-07-11 18:31:53.280002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:07.066 [2024-07-11 18:31:53.280085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.813 ms 00:25:07.066 [2024-07-11 18:31:53.280223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.066 [2024-07-11 18:31:53.283514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:07.066 [2024-07-11 18:31:53.283780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:07.066 [2024-07-11 18:31:53.283911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.180 ms 00:25:07.066 [2024-07-11 18:31:53.284013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.066 [2024-07-11 18:31:53.285728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:07.066 [2024-07-11 18:31:53.285753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:25:07.066 [2024-07-11 18:31:53.285765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.678 ms 00:25:07.066 [2024-07-11 18:31:53.285774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.066 [2024-07-11 18:31:53.287813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:07.066 [2024-07-11 18:31:53.288026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:25:07.066 [2024-07-11 18:31:53.288226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.002 ms 00:25:07.066 [2024-07-11 18:31:53.288416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.066 [2024-07-11 18:31:53.289709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:07.066 [2024-07-11 18:31:53.289962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:07.066 [2024-07-11 18:31:53.290177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.096 ms 00:25:07.066 [2024-07-11 18:31:53.290284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.066 [2024-07-11 18:31:53.291437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:07.066 [2024-07-11 18:31:53.291578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:07.066 [2024-07-11 18:31:53.291691] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.970 ms 00:25:07.066 [2024-07-11 18:31:53.291784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.066 [2024-07-11 18:31:53.291943] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:07.066 [2024-07-11 18:31:53.292074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:25:07.066 [2024-07-11 18:31:53.292224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 3584 / 261120 wr_cnt: 1 state: open 00:25:07.066 [2024-07-11 18:31:53.292396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:07.066 [2024-07-11 18:31:53.292513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:07.066 [2024-07-11 18:31:53.292683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:07.066 [2024-07-11 18:31:53.292796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:07.066 [2024-07-11 18:31:53.292972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:07.066 [2024-07-11 18:31:53.293067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:07.066 [2024-07-11 18:31:53.293096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:07.066 [2024-07-11 18:31:53.293108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:07.066 [2024-07-11 18:31:53.293118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:07.066 [2024-07-11 18:31:53.293128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:07.066 [2024-07-11 18:31:53.293137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:07.066 [2024-07-11 18:31:53.293147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:07.066 [2024-07-11 18:31:53.293156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:07.066 [2024-07-11 18:31:53.293166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:07.066 [2024-07-11 18:31:53.293175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:07.066 [2024-07-11 18:31:53.293185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:07.066 [2024-07-11 18:31:53.293194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:07.066 [2024-07-11 18:31:53.293204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:07.066 [2024-07-11 18:31:53.293213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:07.066 [2024-07-11 18:31:53.293223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:07.066 [2024-07-11 18:31:53.293232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 
00:25:07.066 [2024-07-11 18:31:53.293242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:07.066 [2024-07-11 18:31:53.293251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:07.066 [2024-07-11 18:31:53.293261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:07.066 [2024-07-11 18:31:53.293271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:07.066 [2024-07-11 18:31:53.293280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:07.066 [2024-07-11 18:31:53.293290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:07.066 [2024-07-11 18:31:53.293300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:07.066 [2024-07-11 18:31:53.293311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:07.066 [2024-07-11 18:31:53.293321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:07.066 [2024-07-11 18:31:53.293331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:07.066 [2024-07-11 18:31:53.293340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:07.066 [2024-07-11 18:31:53.293350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:07.066 [2024-07-11 18:31:53.293359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:07.066 [2024-07-11 18:31:53.293369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:07.066 [2024-07-11 18:31:53.293380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:07.066 [2024-07-11 18:31:53.293390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:07.066 [2024-07-11 18:31:53.293400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:07.066 [2024-07-11 18:31:53.293409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:07.066 [2024-07-11 18:31:53.293419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:07.066 [2024-07-11 18:31:53.293428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:07.066 [2024-07-11 18:31:53.293438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:07.066 [2024-07-11 18:31:53.293447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:07.066 [2024-07-11 18:31:53.293457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:07.066 [2024-07-11 18:31:53.293466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:07.066 [2024-07-11 18:31:53.293476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 
wr_cnt: 0 state: free 00:25:07.066 [2024-07-11 18:31:53.293485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:07.066 [2024-07-11 18:31:53.293495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:07.066 [2024-07-11 18:31:53.293505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:07.066 [2024-07-11 18:31:53.293514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:07.066 [2024-07-11 18:31:53.293524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:07.066 [2024-07-11 18:31:53.293533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:07.066 [2024-07-11 18:31:53.293557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:07.066 [2024-07-11 18:31:53.293567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:07.066 [2024-07-11 18:31:53.293576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:07.066 [2024-07-11 18:31:53.293586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:07.066 [2024-07-11 18:31:53.293596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:07.066 [2024-07-11 18:31:53.293605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:07.066 [2024-07-11 18:31:53.293615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:07.066 [2024-07-11 18:31:53.293624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:07.066 [2024-07-11 18:31:53.293635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:07.066 [2024-07-11 18:31:53.293644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:07.066 [2024-07-11 18:31:53.293654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:07.066 [2024-07-11 18:31:53.293663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:07.066 [2024-07-11 18:31:53.293673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:07.066 [2024-07-11 18:31:53.293683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:07.066 [2024-07-11 18:31:53.293693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:07.066 [2024-07-11 18:31:53.293702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:07.066 [2024-07-11 18:31:53.293712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:07.066 [2024-07-11 18:31:53.293722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:07.066 [2024-07-11 18:31:53.293731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:07.066 [2024-07-11 18:31:53.293741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:07.066 [2024-07-11 18:31:53.293751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:07.066 [2024-07-11 18:31:53.293760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:07.066 [2024-07-11 18:31:53.293770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:07.066 [2024-07-11 18:31:53.293780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:07.066 [2024-07-11 18:31:53.293789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:07.066 [2024-07-11 18:31:53.293799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:07.066 [2024-07-11 18:31:53.293808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:07.066 [2024-07-11 18:31:53.293818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:07.067 [2024-07-11 18:31:53.293827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:07.067 [2024-07-11 18:31:53.293837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:07.067 [2024-07-11 18:31:53.293847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:07.067 [2024-07-11 18:31:53.293856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:07.067 [2024-07-11 18:31:53.293866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:07.067 [2024-07-11 18:31:53.293876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:07.067 [2024-07-11 18:31:53.293885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:07.067 [2024-07-11 18:31:53.293895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:07.067 [2024-07-11 18:31:53.293904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:07.067 [2024-07-11 18:31:53.293913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:07.067 [2024-07-11 18:31:53.293923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:07.067 [2024-07-11 18:31:53.293933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:07.067 [2024-07-11 18:31:53.293943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:07.067 [2024-07-11 18:31:53.293953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:07.067 [2024-07-11 18:31:53.293963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:07.067 [2024-07-11 18:31:53.293972] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:07.067 [2024-07-11 18:31:53.293982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:07.067 [2024-07-11 18:31:53.293991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:07.067 [2024-07-11 18:31:53.294009] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:07.067 [2024-07-11 18:31:53.294019] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 85c256d6-5302-4f92-a9ec-36f2d09512df 00:25:07.067 [2024-07-11 18:31:53.294029] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 264704 00:25:07.067 [2024-07-11 18:31:53.294053] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 177344 00:25:07.067 [2024-07-11 18:31:53.294062] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 175360 00:25:07.067 [2024-07-11 18:31:53.294329] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0113 00:25:07.067 [2024-07-11 18:31:53.294383] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:07.067 [2024-07-11 18:31:53.294461] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:07.067 [2024-07-11 18:31:53.294497] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:07.067 [2024-07-11 18:31:53.294549] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:07.067 [2024-07-11 18:31:53.294608] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:07.067 [2024-07-11 18:31:53.294639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:07.067 [2024-07-11 18:31:53.294671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:07.067 [2024-07-11 18:31:53.294704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.698 ms 00:25:07.067 [2024-07-11 18:31:53.294812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.067 [2024-07-11 18:31:53.296204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:07.067 [2024-07-11 18:31:53.296353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:07.067 [2024-07-11 18:31:53.296450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.334 ms 00:25:07.067 [2024-07-11 18:31:53.296570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.067 [2024-07-11 18:31:53.296706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:07.067 [2024-07-11 18:31:53.296757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:07.067 [2024-07-11 18:31:53.296876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:25:07.067 [2024-07-11 18:31:53.296934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.067 [2024-07-11 18:31:53.301098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:07.067 [2024-07-11 18:31:53.301127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:07.067 [2024-07-11 18:31:53.301140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:07.067 [2024-07-11 18:31:53.301150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.067 [2024-07-11 18:31:53.301200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:07.067 
[2024-07-11 18:31:53.301220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:07.067 [2024-07-11 18:31:53.301231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:07.067 [2024-07-11 18:31:53.301240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.067 [2024-07-11 18:31:53.301288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:07.067 [2024-07-11 18:31:53.301304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:07.067 [2024-07-11 18:31:53.301314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:07.067 [2024-07-11 18:31:53.301323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.067 [2024-07-11 18:31:53.301342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:07.067 [2024-07-11 18:31:53.301355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:07.067 [2024-07-11 18:31:53.301370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:07.067 [2024-07-11 18:31:53.301380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.067 [2024-07-11 18:31:53.308823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:07.067 [2024-07-11 18:31:53.308871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:07.067 [2024-07-11 18:31:53.308885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:07.067 [2024-07-11 18:31:53.308894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.067 [2024-07-11 18:31:53.315090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:07.067 [2024-07-11 18:31:53.315140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:07.067 [2024-07-11 18:31:53.315172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:07.067 [2024-07-11 18:31:53.315182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.067 [2024-07-11 18:31:53.315247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:07.067 [2024-07-11 18:31:53.315263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:07.067 [2024-07-11 18:31:53.315273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:07.067 [2024-07-11 18:31:53.315282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.067 [2024-07-11 18:31:53.315389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:07.067 [2024-07-11 18:31:53.315406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:07.067 [2024-07-11 18:31:53.315451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:07.067 [2024-07-11 18:31:53.315476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.067 [2024-07-11 18:31:53.315575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:07.067 [2024-07-11 18:31:53.315595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:07.067 [2024-07-11 18:31:53.315608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:07.067 [2024-07-11 18:31:53.315618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:07.067 [2024-07-11 18:31:53.315675] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:07.067 [2024-07-11 18:31:53.315693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:25:07.067 [2024-07-11 18:31:53.315705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:07.067 [2024-07-11 18:31:53.315717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:07.067 [2024-07-11 18:31:53.315770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:07.067 [2024-07-11 18:31:53.315785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:25:07.067 [2024-07-11 18:31:53.315796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:07.067 [2024-07-11 18:31:53.315837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:07.067 [2024-07-11 18:31:53.315886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:07.067 [2024-07-11 18:31:53.315900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:25:07.067 [2024-07-11 18:31:53.315910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:07.067 [2024-07-11 18:31:53.315926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:07.067 [2024-07-11 18:31:53.316061] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 66.733 ms, result 0
00:25:07.324
00:25:07.324
00:25:07.324 18:31:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:25:09.223 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK
00:25:09.223 18:31:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:25:09.223 [2024-07-11 18:31:55.389650] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization...
00:25:09.223 [2024-07-11 18:31:55.389847] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95179 ] 00:25:09.223 [2024-07-11 18:31:55.541283] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:09.223 [2024-07-11 18:31:55.585298] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:09.482 [2024-07-11 18:31:55.682791] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:09.482 [2024-07-11 18:31:55.682894] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:09.482 [2024-07-11 18:31:55.838962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.482 [2024-07-11 18:31:55.839007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:09.482 [2024-07-11 18:31:55.839025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:09.482 [2024-07-11 18:31:55.839035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.482 [2024-07-11 18:31:55.839127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.482 [2024-07-11 18:31:55.839153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:09.482 [2024-07-11 18:31:55.839169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:25:09.482 [2024-07-11 18:31:55.839178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.482 [2024-07-11 18:31:55.839233] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:09.482 [2024-07-11 18:31:55.839555] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:09.482 [2024-07-11 18:31:55.839590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.482 [2024-07-11 18:31:55.839602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:09.482 [2024-07-11 18:31:55.839613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.363 ms 00:25:09.482 [2024-07-11 18:31:55.839623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.482 [2024-07-11 18:31:55.840677] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:09.482 [2024-07-11 18:31:55.842681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.482 [2024-07-11 18:31:55.842718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:09.482 [2024-07-11 18:31:55.842755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.006 ms 00:25:09.482 [2024-07-11 18:31:55.842774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.482 [2024-07-11 18:31:55.842838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.482 [2024-07-11 18:31:55.842855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:09.482 [2024-07-11 18:31:55.842874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:25:09.482 [2024-07-11 18:31:55.842884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.482 [2024-07-11 18:31:55.847253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:25:09.482 [2024-07-11 18:31:55.847292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:09.482 [2024-07-11 18:31:55.847359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.292 ms 00:25:09.482 [2024-07-11 18:31:55.847370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.482 [2024-07-11 18:31:55.847477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.482 [2024-07-11 18:31:55.847510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:09.482 [2024-07-11 18:31:55.847531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:25:09.482 [2024-07-11 18:31:55.847542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.482 [2024-07-11 18:31:55.847606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.482 [2024-07-11 18:31:55.847622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:09.482 [2024-07-11 18:31:55.847650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:25:09.482 [2024-07-11 18:31:55.847675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.482 [2024-07-11 18:31:55.847716] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:09.482 [2024-07-11 18:31:55.848966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.482 [2024-07-11 18:31:55.848999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:09.482 [2024-07-11 18:31:55.849019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.257 ms 00:25:09.482 [2024-07-11 18:31:55.849029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.482 [2024-07-11 18:31:55.849072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.482 [2024-07-11 18:31:55.849102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:09.482 [2024-07-11 18:31:55.849129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:25:09.482 [2024-07-11 18:31:55.849153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.482 [2024-07-11 18:31:55.849191] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:09.482 [2024-07-11 18:31:55.849218] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:09.482 [2024-07-11 18:31:55.849279] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:09.482 [2024-07-11 18:31:55.849303] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:25:09.482 [2024-07-11 18:31:55.849400] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:09.482 [2024-07-11 18:31:55.849424] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:09.482 [2024-07-11 18:31:55.849437] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:25:09.482 [2024-07-11 18:31:55.849460] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:09.482 [2024-07-11 18:31:55.849472] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:09.482 [2024-07-11 18:31:55.849496] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:09.482 [2024-07-11 18:31:55.849506] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:09.482 [2024-07-11 18:31:55.849515] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:09.482 [2024-07-11 18:31:55.849524] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:09.482 [2024-07-11 18:31:55.849534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.482 [2024-07-11 18:31:55.849552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:09.482 [2024-07-11 18:31:55.849567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.346 ms 00:25:09.482 [2024-07-11 18:31:55.849578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.482 [2024-07-11 18:31:55.849659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.482 [2024-07-11 18:31:55.849684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:09.482 [2024-07-11 18:31:55.849694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:25:09.482 [2024-07-11 18:31:55.849703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.482 [2024-07-11 18:31:55.849792] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:09.482 [2024-07-11 18:31:55.849807] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:09.482 [2024-07-11 18:31:55.849819] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:09.482 [2024-07-11 18:31:55.849833] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:09.482 [2024-07-11 18:31:55.849844] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:09.482 [2024-07-11 18:31:55.849853] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:09.482 [2024-07-11 18:31:55.849862] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:09.482 [2024-07-11 18:31:55.849871] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:09.482 [2024-07-11 18:31:55.849879] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:09.482 [2024-07-11 18:31:55.849888] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:09.482 [2024-07-11 18:31:55.849899] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:09.482 [2024-07-11 18:31:55.849908] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:09.482 [2024-07-11 18:31:55.849917] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:09.482 [2024-07-11 18:31:55.849925] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:09.482 [2024-07-11 18:31:55.849934] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:09.482 [2024-07-11 18:31:55.849942] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:09.482 [2024-07-11 18:31:55.849951] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:09.482 [2024-07-11 18:31:55.849959] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:09.482 [2024-07-11 18:31:55.849968] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:09.482 [2024-07-11 18:31:55.849976] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:09.482 [2024-07-11 18:31:55.849985] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:09.482 [2024-07-11 18:31:55.849993] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:09.482 [2024-07-11 18:31:55.850001] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:09.482 [2024-07-11 18:31:55.850010] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:09.482 [2024-07-11 18:31:55.850018] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:09.482 [2024-07-11 18:31:55.850027] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:09.482 [2024-07-11 18:31:55.850039] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:09.482 [2024-07-11 18:31:55.850048] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:09.482 [2024-07-11 18:31:55.850057] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:09.482 [2024-07-11 18:31:55.850065] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:09.482 [2024-07-11 18:31:55.850074] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:09.482 [2024-07-11 18:31:55.850082] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:09.482 [2024-07-11 18:31:55.850091] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:09.482 [2024-07-11 18:31:55.850099] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:09.482 [2024-07-11 18:31:55.850107] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:09.482 [2024-07-11 18:31:55.850116] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:09.482 [2024-07-11 18:31:55.850125] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:09.482 [2024-07-11 18:31:55.850146] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:09.482 [2024-07-11 18:31:55.850159] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:09.482 [2024-07-11 18:31:55.850168] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:09.482 [2024-07-11 18:31:55.850177] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:09.482 [2024-07-11 18:31:55.850186] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:09.482 [2024-07-11 18:31:55.850198] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:09.482 [2024-07-11 18:31:55.850207] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:09.482 [2024-07-11 18:31:55.850216] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:09.482 [2024-07-11 18:31:55.850225] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:09.482 [2024-07-11 18:31:55.850235] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:09.482 [2024-07-11 18:31:55.850244] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:09.482 [2024-07-11 18:31:55.850253] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:09.482 [2024-07-11 18:31:55.850261] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:09.482 
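The dump_region() lines above report each region's offset and size in MiB, while the "SB metadata layout" dump just below reports the same regions as blk_offs/blk_sz pairs in hex. The two views agree if you assume SPDK FTL's 4 KiB block size; blk_to_mib below is a hypothetical helper (not part of the test suite) that reproduces the conversion, e.g. the l2p region at blk_offs:0x20 blk_sz:0x5000 maps to the "offset: 0.12 MiB / blocks: 80.00 MiB" printed above.

    # Hypothetical helper, assuming FTL's 4 KiB block size; not from the SPDK tree.
    blk_to_mib() {
      # $1 = block offset or count in hex as printed, e.g. 0x5000
      awk -v blocks="$(( $1 ))" 'BEGIN { printf "%.2f MiB\n", blocks * 4096 / 1048576 }'
    }
    blk_to_mib 0x20    # 0.12 MiB  -> l2p region offset
    blk_to_mib 0x5000  # 80.00 MiB -> l2p region size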
[2024-07-11 18:31:55.850270] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:09.482 [2024-07-11 18:31:55.850279] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:09.482 [2024-07-11 18:31:55.850288] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:09.482 [2024-07-11 18:31:55.850297] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:09.482 [2024-07-11 18:31:55.850309] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:09.482 [2024-07-11 18:31:55.850320] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:09.482 [2024-07-11 18:31:55.850330] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:09.482 [2024-07-11 18:31:55.850339] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:09.482 [2024-07-11 18:31:55.850351] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:09.482 [2024-07-11 18:31:55.850361] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:09.482 [2024-07-11 18:31:55.850371] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:09.482 [2024-07-11 18:31:55.850380] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:09.482 [2024-07-11 18:31:55.850390] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:09.482 [2024-07-11 18:31:55.850400] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:09.482 [2024-07-11 18:31:55.850409] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:09.482 [2024-07-11 18:31:55.850418] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:09.482 [2024-07-11 18:31:55.850427] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:09.483 [2024-07-11 18:31:55.850437] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:09.483 [2024-07-11 18:31:55.850447] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:09.483 [2024-07-11 18:31:55.850456] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:09.483 [2024-07-11 18:31:55.850467] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:09.483 [2024-07-11 18:31:55.850488] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:25:09.483 [2024-07-11 18:31:55.850498] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:09.483 [2024-07-11 18:31:55.850525] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:09.483 [2024-07-11 18:31:55.850553] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:09.483 [2024-07-11 18:31:55.850565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.483 [2024-07-11 18:31:55.850574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:09.483 [2024-07-11 18:31:55.850587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.827 ms 00:25:09.483 [2024-07-11 18:31:55.850596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.483 [2024-07-11 18:31:55.869170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.483 [2024-07-11 18:31:55.869238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:09.483 [2024-07-11 18:31:55.869263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.519 ms 00:25:09.483 [2024-07-11 18:31:55.869279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.483 [2024-07-11 18:31:55.869427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.483 [2024-07-11 18:31:55.869446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:09.483 [2024-07-11 18:31:55.869461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:25:09.483 [2024-07-11 18:31:55.869482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.483 [2024-07-11 18:31:55.878659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.483 [2024-07-11 18:31:55.878714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:09.483 [2024-07-11 18:31:55.878750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.079 ms 00:25:09.483 [2024-07-11 18:31:55.878776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.483 [2024-07-11 18:31:55.878844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.483 [2024-07-11 18:31:55.878863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:09.483 [2024-07-11 18:31:55.878885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:09.483 [2024-07-11 18:31:55.878898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.483 [2024-07-11 18:31:55.879336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.483 [2024-07-11 18:31:55.879381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:09.483 [2024-07-11 18:31:55.879410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.379 ms 00:25:09.483 [2024-07-11 18:31:55.879424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.483 [2024-07-11 18:31:55.879632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.483 [2024-07-11 18:31:55.879656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:09.483 [2024-07-11 18:31:55.879668] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.170 ms 00:25:09.483 [2024-07-11 18:31:55.879681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.483 [2024-07-11 18:31:55.884162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.483 [2024-07-11 18:31:55.884198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:09.483 [2024-07-11 18:31:55.884213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.436 ms 00:25:09.483 [2024-07-11 18:31:55.884223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.483 [2024-07-11 18:31:55.886505] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:25:09.483 [2024-07-11 18:31:55.886566] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:09.483 [2024-07-11 18:31:55.886584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.483 [2024-07-11 18:31:55.886595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:09.483 [2024-07-11 18:31:55.886605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.243 ms 00:25:09.483 [2024-07-11 18:31:55.886614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.741 [2024-07-11 18:31:55.900956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.741 [2024-07-11 18:31:55.901018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:09.741 [2024-07-11 18:31:55.901036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.298 ms 00:25:09.741 [2024-07-11 18:31:55.901046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.741 [2024-07-11 18:31:55.902965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.741 [2024-07-11 18:31:55.903001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:09.741 [2024-07-11 18:31:55.903030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.835 ms 00:25:09.741 [2024-07-11 18:31:55.903040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.741 [2024-07-11 18:31:55.904799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.741 [2024-07-11 18:31:55.904838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:09.741 [2024-07-11 18:31:55.904853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.721 ms 00:25:09.741 [2024-07-11 18:31:55.904863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.741 [2024-07-11 18:31:55.905237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.741 [2024-07-11 18:31:55.905256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:09.741 [2024-07-11 18:31:55.905284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.304 ms 00:25:09.741 [2024-07-11 18:31:55.905306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.741 [2024-07-11 18:31:55.920297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.741 [2024-07-11 18:31:55.920365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:09.741 [2024-07-11 18:31:55.920384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
14.961 ms 00:25:09.741 [2024-07-11 18:31:55.920394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.741 [2024-07-11 18:31:55.927071] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:09.741 [2024-07-11 18:31:55.929224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.741 [2024-07-11 18:31:55.929257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:09.741 [2024-07-11 18:31:55.929271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.773 ms 00:25:09.741 [2024-07-11 18:31:55.929281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.741 [2024-07-11 18:31:55.929339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.741 [2024-07-11 18:31:55.929355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:09.741 [2024-07-11 18:31:55.929374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:09.741 [2024-07-11 18:31:55.929384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.741 [2024-07-11 18:31:55.929952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.741 [2024-07-11 18:31:55.929975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:09.741 [2024-07-11 18:31:55.929987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.511 ms 00:25:09.741 [2024-07-11 18:31:55.929996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.742 [2024-07-11 18:31:55.930026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.742 [2024-07-11 18:31:55.930038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:09.742 [2024-07-11 18:31:55.930048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:09.742 [2024-07-11 18:31:55.930057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.742 [2024-07-11 18:31:55.930135] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:09.742 [2024-07-11 18:31:55.930178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.742 [2024-07-11 18:31:55.930192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:09.742 [2024-07-11 18:31:55.930203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:25:09.742 [2024-07-11 18:31:55.930212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.742 [2024-07-11 18:31:55.933454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.742 [2024-07-11 18:31:55.933493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:09.742 [2024-07-11 18:31:55.933508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.217 ms 00:25:09.742 [2024-07-11 18:31:55.933519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.742 [2024-07-11 18:31:55.933603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.742 [2024-07-11 18:31:55.933618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:09.742 [2024-07-11 18:31:55.933636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:25:09.742 [2024-07-11 18:31:55.933645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.742 
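Each management step in the startup sequence above is traced by mngt/ftl_mngt.c as an Action / name / duration / status quadruple, and the finish_msg line that follows reports the wall-clock total for the whole 'FTL startup' process (95.667 ms here). Given the log saved to a file, a quick cross-check is to total the per-step durations; the sum normally lands somewhat below the reported total, since the trace omits time spent between steps. A rough sketch, where ftl_startup.log is a hypothetical file name:

    # Not part of the test suite: sum the per-step durations printed by
    # trace_step() and compare against the finish_msg total.
    grep -o 'duration: [0-9.]* ms' ftl_startup.log \
      | awk '{ sum += $2 } END { printf "steps total: %.3f ms\n", sum }'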
[2024-07-11 18:31:55.935084] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 95.667 ms, result 0 00:25:55.303  Copying: 24/1024 [MB] (24 MBps) Copying: 46/1024 [MB] (22 MBps) Copying: 69/1024 [MB] (22 MBps) Copying: 92/1024 [MB] (22 MBps) Copying: 114/1024 [MB] (22 MBps) Copying: 137/1024 [MB] (22 MBps) Copying: 159/1024 [MB] (22 MBps) Copying: 182/1024 [MB] (22 MBps) Copying: 205/1024 [MB] (23 MBps) Copying: 228/1024 [MB] (22 MBps) Copying: 250/1024 [MB] (22 MBps) Copying: 273/1024 [MB] (22 MBps) Copying: 296/1024 [MB] (22 MBps) Copying: 318/1024 [MB] (22 MBps) Copying: 341/1024 [MB] (22 MBps) Copying: 363/1024 [MB] (22 MBps) Copying: 385/1024 [MB] (22 MBps) Copying: 408/1024 [MB] (22 MBps) Copying: 430/1024 [MB] (22 MBps) Copying: 453/1024 [MB] (22 MBps) Copying: 476/1024 [MB] (22 MBps) Copying: 498/1024 [MB] (22 MBps) Copying: 521/1024 [MB] (22 MBps) Copying: 543/1024 [MB] (21 MBps) Copying: 565/1024 [MB] (22 MBps) Copying: 588/1024 [MB] (22 MBps) Copying: 611/1024 [MB] (22 MBps) Copying: 633/1024 [MB] (22 MBps) Copying: 655/1024 [MB] (22 MBps) Copying: 678/1024 [MB] (22 MBps) Copying: 700/1024 [MB] (22 MBps) Copying: 722/1024 [MB] (22 MBps) Copying: 745/1024 [MB] (22 MBps) Copying: 768/1024 [MB] (22 MBps) Copying: 790/1024 [MB] (22 MBps) Copying: 813/1024 [MB] (22 MBps) Copying: 836/1024 [MB] (22 MBps) Copying: 859/1024 [MB] (23 MBps) Copying: 881/1024 [MB] (22 MBps) Copying: 904/1024 [MB] (22 MBps) Copying: 927/1024 [MB] (22 MBps) Copying: 949/1024 [MB] (22 MBps) Copying: 972/1024 [MB] (22 MBps) Copying: 994/1024 [MB] (22 MBps) Copying: 1017/1024 [MB] (22 MBps) Copying: 1024/1024 [MB] (average 22 MBps)[2024-07-11 18:32:41.676945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.303 [2024-07-11 18:32:41.677315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:55.303 [2024-07-11 18:32:41.677479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:55.303 [2024-07-11 18:32:41.677653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.303 [2024-07-11 18:32:41.677753] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:55.303 [2024-07-11 18:32:41.678554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.303 [2024-07-11 18:32:41.678591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:55.303 [2024-07-11 18:32:41.678610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.450 ms 00:25:55.303 [2024-07-11 18:32:41.678625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.303 [2024-07-11 18:32:41.678916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.303 [2024-07-11 18:32:41.678938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:55.303 [2024-07-11 18:32:41.678973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.259 ms 00:25:55.303 [2024-07-11 18:32:41.678988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.303 [2024-07-11 18:32:41.683351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.303 [2024-07-11 18:32:41.683394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:55.303 [2024-07-11 18:32:41.683412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.323 ms 00:25:55.303 [2024-07-11 
18:32:41.683427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.303 [2024-07-11 18:32:41.692261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.303 [2024-07-11 18:32:41.692298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:55.303 [2024-07-11 18:32:41.692313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.806 ms 00:25:55.303 [2024-07-11 18:32:41.692330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.303 [2024-07-11 18:32:41.693872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.303 [2024-07-11 18:32:41.693941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:55.303 [2024-07-11 18:32:41.693955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.483 ms 00:25:55.303 [2024-07-11 18:32:41.693965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.303 [2024-07-11 18:32:41.697132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.303 [2024-07-11 18:32:41.697175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:55.303 [2024-07-11 18:32:41.697190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.145 ms 00:25:55.303 [2024-07-11 18:32:41.697200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.303 [2024-07-11 18:32:41.700868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.303 [2024-07-11 18:32:41.700923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:55.303 [2024-07-11 18:32:41.700946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.644 ms 00:25:55.303 [2024-07-11 18:32:41.700957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.303 [2024-07-11 18:32:41.702832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.303 [2024-07-11 18:32:41.702883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:25:55.303 [2024-07-11 18:32:41.702913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.854 ms 00:25:55.303 [2024-07-11 18:32:41.702922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.303 [2024-07-11 18:32:41.704368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.303 [2024-07-11 18:32:41.704399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:25:55.303 [2024-07-11 18:32:41.704428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.425 ms 00:25:55.303 [2024-07-11 18:32:41.704437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.303 [2024-07-11 18:32:41.705566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.303 [2024-07-11 18:32:41.705600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:55.303 [2024-07-11 18:32:41.705613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.108 ms 00:25:55.303 [2024-07-11 18:32:41.705622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.303 [2024-07-11 18:32:41.706724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.303 [2024-07-11 18:32:41.706762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:55.303 [2024-07-11 18:32:41.706791] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 1.060 ms 00:25:55.303 [2024-07-11 18:32:41.706800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.303 [2024-07-11 18:32:41.706818] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:55.303 [2024-07-11 18:32:41.706834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:25:55.303 [2024-07-11 18:32:41.706847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 3584 / 261120 wr_cnt: 1 state: open 00:25:55.303 [2024-07-11 18:32:41.706856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:55.303 [2024-07-11 18:32:41.706866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:55.303 [2024-07-11 18:32:41.706874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:55.303 [2024-07-11 18:32:41.706884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:55.303 [2024-07-11 18:32:41.706893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:55.303 [2024-07-11 18:32:41.706902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:55.303 [2024-07-11 18:32:41.706911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:55.303 [2024-07-11 18:32:41.706921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:55.303 [2024-07-11 18:32:41.706930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:55.303 [2024-07-11 18:32:41.706939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:55.303 [2024-07-11 18:32:41.706948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:55.303 [2024-07-11 18:32:41.706957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:55.303 [2024-07-11 18:32:41.706966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:55.303 [2024-07-11 18:32:41.706975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:55.303 [2024-07-11 18:32:41.706985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:55.303 [2024-07-11 18:32:41.706994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:55.303 [2024-07-11 18:32:41.707003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:55.303 [2024-07-11 18:32:41.707012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:55.303 [2024-07-11 18:32:41.707021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:55.303 [2024-07-11 18:32:41.707030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:55.303 [2024-07-11 18:32:41.707039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 
00:25:55.303 [2024-07-11 18:32:41.707048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:55.303 [2024-07-11 18:32:41.707057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:55.303 [2024-07-11 18:32:41.707068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:55.303 [2024-07-11 18:32:41.707079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:55.303 [2024-07-11 18:32:41.707088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:55.303 [2024-07-11 18:32:41.707125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:55.303 [2024-07-11 18:32:41.707137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:55.303 [2024-07-11 18:32:41.707146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:55.303 [2024-07-11 18:32:41.707156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:55.303 [2024-07-11 18:32:41.707166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:55.303 [2024-07-11 18:32:41.707175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:55.303 [2024-07-11 18:32:41.707185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:55.303 [2024-07-11 18:32:41.707194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:55.303 [2024-07-11 18:32:41.707219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:55.303 [2024-07-11 18:32:41.707229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:55.303 [2024-07-11 18:32:41.707239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:55.303 [2024-07-11 18:32:41.707248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:55.303 [2024-07-11 18:32:41.707258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:55.303 [2024-07-11 18:32:41.707268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:55.303 [2024-07-11 18:32:41.707278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:55.304 [2024-07-11 18:32:41.707287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:55.304 [2024-07-11 18:32:41.707303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:55.304 [2024-07-11 18:32:41.707313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:55.304 [2024-07-11 18:32:41.707323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:55.304 [2024-07-11 18:32:41.707332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 
wr_cnt: 0 state: free 00:25:55.304 [2024-07-11 18:32:41.707385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:55.304 [2024-07-11 18:32:41.707397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:55.304 [2024-07-11 18:32:41.707407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:55.304 [2024-07-11 18:32:41.707417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:55.304 [2024-07-11 18:32:41.707427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:55.304 [2024-07-11 18:32:41.707454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:55.304 [2024-07-11 18:32:41.707464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:55.304 [2024-07-11 18:32:41.707474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:55.304 [2024-07-11 18:32:41.707484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:55.304 [2024-07-11 18:32:41.707495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:55.304 [2024-07-11 18:32:41.707520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:55.304 [2024-07-11 18:32:41.707530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:55.304 [2024-07-11 18:32:41.707540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:55.304 [2024-07-11 18:32:41.707550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:55.304 [2024-07-11 18:32:41.707560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:55.304 [2024-07-11 18:32:41.707570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:55.304 [2024-07-11 18:32:41.707580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:55.304 [2024-07-11 18:32:41.707590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:55.304 [2024-07-11 18:32:41.707599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:55.304 [2024-07-11 18:32:41.707609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:55.304 [2024-07-11 18:32:41.707619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:55.304 [2024-07-11 18:32:41.707638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:55.304 [2024-07-11 18:32:41.707648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:55.304 [2024-07-11 18:32:41.707658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:55.304 [2024-07-11 18:32:41.707668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:55.304 [2024-07-11 18:32:41.707677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:55.304 [2024-07-11 18:32:41.707687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:55.304 [2024-07-11 18:32:41.707697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:55.304 [2024-07-11 18:32:41.707707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:55.304 [2024-07-11 18:32:41.707717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:55.304 [2024-07-11 18:32:41.707726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:55.304 [2024-07-11 18:32:41.707736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:55.304 [2024-07-11 18:32:41.707746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:55.304 [2024-07-11 18:32:41.707756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:55.304 [2024-07-11 18:32:41.707766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:55.304 [2024-07-11 18:32:41.707776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:55.304 [2024-07-11 18:32:41.707785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:55.304 [2024-07-11 18:32:41.707795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:55.304 [2024-07-11 18:32:41.707805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:55.304 [2024-07-11 18:32:41.707815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:55.304 [2024-07-11 18:32:41.707824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:55.304 [2024-07-11 18:32:41.707834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:55.304 [2024-07-11 18:32:41.707844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:55.304 [2024-07-11 18:32:41.707854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:55.304 [2024-07-11 18:32:41.707864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:55.304 [2024-07-11 18:32:41.707874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:55.304 [2024-07-11 18:32:41.707884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:55.304 [2024-07-11 18:32:41.707895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:55.304 [2024-07-11 18:32:41.707905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:55.304 [2024-07-11 18:32:41.707915] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:55.304 [2024-07-11 18:32:41.707924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:55.304 [2024-07-11 18:32:41.707934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:55.304 [2024-07-11 18:32:41.707952] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:55.304 [2024-07-11 18:32:41.707962] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 85c256d6-5302-4f92-a9ec-36f2d09512df 00:25:55.304 [2024-07-11 18:32:41.707972] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 264704 00:25:55.304 [2024-07-11 18:32:41.707982] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:55.304 [2024-07-11 18:32:41.707991] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:55.304 [2024-07-11 18:32:41.708001] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:55.304 [2024-07-11 18:32:41.708010] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:55.304 [2024-07-11 18:32:41.708025] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:55.304 [2024-07-11 18:32:41.708035] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:55.304 [2024-07-11 18:32:41.708043] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:55.304 [2024-07-11 18:32:41.708052] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:55.304 [2024-07-11 18:32:41.708062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.304 [2024-07-11 18:32:41.708072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:55.304 [2024-07-11 18:32:41.708082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.245 ms 00:25:55.304 [2024-07-11 18:32:41.708092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.304 [2024-07-11 18:32:41.709340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.304 [2024-07-11 18:32:41.709367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:55.304 [2024-07-11 18:32:41.709379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.217 ms 00:25:55.304 [2024-07-11 18:32:41.709396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.304 [2024-07-11 18:32:41.709490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.304 [2024-07-11 18:32:41.709506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:55.304 [2024-07-11 18:32:41.709531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:25:55.304 [2024-07-11 18:32:41.709540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.304 [2024-07-11 18:32:41.713910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:55.304 [2024-07-11 18:32:41.714105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:55.304 [2024-07-11 18:32:41.714274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:55.304 [2024-07-11 18:32:41.714396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.304 [2024-07-11 18:32:41.714559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:55.304 [2024-07-11 
18:32:41.714646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:55.304 [2024-07-11 18:32:41.714825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:55.304 [2024-07-11 18:32:41.714948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.304 [2024-07-11 18:32:41.715067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:55.561 [2024-07-11 18:32:41.715243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:55.561 [2024-07-11 18:32:41.715267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:55.561 [2024-07-11 18:32:41.715286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.561 [2024-07-11 18:32:41.715314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:55.561 [2024-07-11 18:32:41.715327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:55.561 [2024-07-11 18:32:41.715367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:55.561 [2024-07-11 18:32:41.715393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.561 [2024-07-11 18:32:41.723592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:55.561 [2024-07-11 18:32:41.723644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:55.561 [2024-07-11 18:32:41.723698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:55.561 [2024-07-11 18:32:41.723709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.561 [2024-07-11 18:32:41.729816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:55.561 [2024-07-11 18:32:41.729861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:55.561 [2024-07-11 18:32:41.729876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:55.561 [2024-07-11 18:32:41.729886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.561 [2024-07-11 18:32:41.729921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:55.561 [2024-07-11 18:32:41.729934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:55.561 [2024-07-11 18:32:41.729944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:55.561 [2024-07-11 18:32:41.729953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.561 [2024-07-11 18:32:41.730016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:55.561 [2024-07-11 18:32:41.730030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:55.561 [2024-07-11 18:32:41.730040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:55.561 [2024-07-11 18:32:41.730049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.561 [2024-07-11 18:32:41.730168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:55.561 [2024-07-11 18:32:41.730186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:55.561 [2024-07-11 18:32:41.730224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:55.561 [2024-07-11 18:32:41.730234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.561 [2024-07-11 18:32:41.730288] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:55.561 [2024-07-11 18:32:41.730303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:55.561 [2024-07-11 18:32:41.730314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:55.561 [2024-07-11 18:32:41.730323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.561 [2024-07-11 18:32:41.730364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:55.561 [2024-07-11 18:32:41.730390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:55.561 [2024-07-11 18:32:41.730407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:55.561 [2024-07-11 18:32:41.730416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.561 [2024-07-11 18:32:41.730502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:55.561 [2024-07-11 18:32:41.730533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:55.561 [2024-07-11 18:32:41.730543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:55.561 [2024-07-11 18:32:41.730553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.561 [2024-07-11 18:32:41.730682] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 53.713 ms, result 0 00:25:55.561 00:25:55.561 00:25:55.561 18:32:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:25:57.457 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:25:57.457 18:32:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:25:57.457 18:32:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:25:57.457 18:32:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:57.457 18:32:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:25:57.457 18:32:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:25:57.715 18:32:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:57.715 18:32:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:25:57.715 18:32:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 93308 00:25:57.715 18:32:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@948 -- # '[' -z 93308 ']' 00:25:57.715 18:32:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@952 -- # kill -0 93308 00:25:57.715 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (93308) - No such process 00:25:57.715 Process with pid 93308 is not found 00:25:57.715 18:32:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@975 -- # echo 'Process with pid 93308 is not found' 00:25:57.715 18:32:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:25:57.973 Remove shared memory files 00:25:57.973 18:32:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:25:57.973 18:32:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:25:57.973 18:32:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 
-- # rm -f rm -f 00:25:57.973 18:32:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:25:57.973 18:32:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:25:57.973 18:32:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:25:57.973 18:32:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:25:57.973 00:25:57.973 real 3m49.998s 00:25:57.973 user 4m25.427s 00:25:57.973 sys 0m34.440s 00:25:57.973 18:32:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:57.973 18:32:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:57.973 ************************************ 00:25:57.973 END TEST ftl_dirty_shutdown 00:25:57.973 ************************************ 00:25:57.973 18:32:44 ftl -- common/autotest_common.sh@1142 -- # return 0 00:25:57.973 18:32:44 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:25:57.973 18:32:44 ftl -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:25:57.973 18:32:44 ftl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:57.973 18:32:44 ftl -- common/autotest_common.sh@10 -- # set +x 00:25:57.973 ************************************ 00:25:57.973 START TEST ftl_upgrade_shutdown 00:25:57.973 ************************************ 00:25:57.973 18:32:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:25:57.973 * Looking for test storage... 00:25:57.973 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:25:57.973 18:32:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:25:57.973 18:32:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:25:57.973 18:32:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:25:57.973 18:32:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:25:57.973 18:32:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
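The teardown traced above follows a defensive pattern: delete every generated artifact unconditionally, then signal the target only if kill -0 reports the pid as alive (here 93308 had already exited, hence the "No such process" fallback and the echoed notice). A minimal sketch of that pattern in the same shell style, where restore_kill_sketch is an illustrative name rather than the exact helper from common.sh:

  # Remove test artifacts, then kill the target only if it still exists.
  # restore_kill_sketch is a hypothetical condensation of restore_kill + killprocess.
  restore_kill_sketch() {
      local pid=$1
      rm -f "$testdir/config/ftl.json" "$testdir/testfile" "$testdir/testfile2" \
            "$testdir/testfile.md5" "$testdir/testfile2.md5"
      if kill -0 "$pid" 2>/dev/null; then
          kill "$pid"
      else
          echo "Process with pid $pid is not found"
      fi
  }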
00:25:57.973 18:32:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:25:57.973 18:32:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:57.973 18:32:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:25:57.973 18:32:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:25:57.973 18:32:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:57.973 18:32:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:57.973 18:32:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:25:57.973 18:32:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:25:57.973 18:32:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:57.973 18:32:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:57.973 18:32:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:25:57.973 18:32:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:25:57.973 18:32:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:57.973 18:32:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:57.973 18:32:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:25:57.973 18:32:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:25:57.973 18:32:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:57.973 18:32:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:57.973 18:32:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:57.973 18:32:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:57.973 18:32:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:25:57.973 18:32:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:25:57.973 18:32:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:57.973 18:32:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:57.973 18:32:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:57.973 18:32:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:25:57.973 18:32:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:25:57.973 18:32:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:25:57.973 18:32:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:25:57.973 18:32:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:25:57.973 18:32:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:25:57.973 
18:32:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:25:57.973 18:32:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:25:57.973 18:32:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:25:57.973 18:32:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:25:57.973 18:32:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:25:57.973 18:32:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:25:57.973 18:32:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:25:57.973 18:32:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:25:57.973 18:32:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:25:57.973 18:32:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:25:57.973 18:32:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=95719 00:25:57.973 18:32:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:25:57.973 18:32:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 95719 00:25:57.973 18:32:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:25:57.973 18:32:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@829 -- # '[' -z 95719 ']' 00:25:57.974 18:32:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:57.974 18:32:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:57.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:57.974 18:32:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:57.974 18:32:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:57.974 18:32:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:58.232 [2024-07-11 18:32:44.479097] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
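tcp_target_setup, traced above, reduces to launching spdk_tgt pinned to core 0 and blocking until its RPC socket answers; waitforlisten is the blocking half, with max_retries=100 as seen in the trace. A rough equivalent, assuming rpc_get_methods as the liveness probe (the real helper in autotest_common.sh is more thorough):

  # Start the target on core 0, then poll /var/tmp/spdk.sock until it responds.
  "$spdk_tgt_bin" '--cpumask=[0]' &
  spdk_tgt_pid=$!
  for (( i = 0; i < 100; i++ )); do        # mirrors max_retries=100
      "$rpc_py" rpc_get_methods >/dev/null 2>&1 && break
      sleep 0.1
  done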
00:25:58.232 [2024-07-11 18:32:44.479294] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95719 ] 00:25:58.232 [2024-07-11 18:32:44.626530] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:58.490 [2024-07-11 18:32:44.662987] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:59.056 18:32:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:59.056 18:32:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # return 0 00:25:59.056 18:32:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:25:59.057 18:32:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:25:59.057 18:32:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:25:59.057 18:32:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:59.057 18:32:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:25:59.057 18:32:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:59.057 18:32:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:25:59.057 18:32:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:59.057 18:32:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:25:59.057 18:32:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:59.057 18:32:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:25:59.057 18:32:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:59.057 18:32:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:25:59.057 18:32:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:59.057 18:32:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:25:59.057 18:32:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:25:59.057 18:32:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:25:59.057 18:32:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:25:59.057 18:32:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:25:59.057 18:32:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:25:59.057 18:32:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:25:59.315 18:32:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:25:59.315 18:32:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:25:59.315 18:32:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:25:59.315 18:32:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=basen1 00:25:59.315 18:32:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:25:59.315 18:32:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:25:59.315 18:32:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 
-- # local nb 00:25:59.315 18:32:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:25:59.574 18:32:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:25:59.574 { 00:25:59.574 "name": "basen1", 00:25:59.574 "aliases": [ 00:25:59.574 "63218426-e022-49e8-9c04-653d5dc226bb" 00:25:59.574 ], 00:25:59.574 "product_name": "NVMe disk", 00:25:59.574 "block_size": 4096, 00:25:59.574 "num_blocks": 1310720, 00:25:59.574 "uuid": "63218426-e022-49e8-9c04-653d5dc226bb", 00:25:59.574 "assigned_rate_limits": { 00:25:59.574 "rw_ios_per_sec": 0, 00:25:59.574 "rw_mbytes_per_sec": 0, 00:25:59.574 "r_mbytes_per_sec": 0, 00:25:59.574 "w_mbytes_per_sec": 0 00:25:59.574 }, 00:25:59.574 "claimed": true, 00:25:59.574 "claim_type": "read_many_write_one", 00:25:59.574 "zoned": false, 00:25:59.574 "supported_io_types": { 00:25:59.574 "read": true, 00:25:59.574 "write": true, 00:25:59.574 "unmap": true, 00:25:59.574 "flush": true, 00:25:59.574 "reset": true, 00:25:59.574 "nvme_admin": true, 00:25:59.574 "nvme_io": true, 00:25:59.574 "nvme_io_md": false, 00:25:59.574 "write_zeroes": true, 00:25:59.574 "zcopy": false, 00:25:59.574 "get_zone_info": false, 00:25:59.574 "zone_management": false, 00:25:59.574 "zone_append": false, 00:25:59.574 "compare": true, 00:25:59.574 "compare_and_write": false, 00:25:59.574 "abort": true, 00:25:59.574 "seek_hole": false, 00:25:59.574 "seek_data": false, 00:25:59.574 "copy": true, 00:25:59.574 "nvme_iov_md": false 00:25:59.574 }, 00:25:59.574 "driver_specific": { 00:25:59.574 "nvme": [ 00:25:59.574 { 00:25:59.574 "pci_address": "0000:00:11.0", 00:25:59.574 "trid": { 00:25:59.574 "trtype": "PCIe", 00:25:59.574 "traddr": "0000:00:11.0" 00:25:59.574 }, 00:25:59.574 "ctrlr_data": { 00:25:59.574 "cntlid": 0, 00:25:59.574 "vendor_id": "0x1b36", 00:25:59.574 "model_number": "QEMU NVMe Ctrl", 00:25:59.574 "serial_number": "12341", 00:25:59.574 "firmware_revision": "8.0.0", 00:25:59.574 "subnqn": "nqn.2019-08.org.qemu:12341", 00:25:59.574 "oacs": { 00:25:59.574 "security": 0, 00:25:59.574 "format": 1, 00:25:59.574 "firmware": 0, 00:25:59.574 "ns_manage": 1 00:25:59.574 }, 00:25:59.574 "multi_ctrlr": false, 00:25:59.574 "ana_reporting": false 00:25:59.574 }, 00:25:59.574 "vs": { 00:25:59.574 "nvme_version": "1.4" 00:25:59.574 }, 00:25:59.574 "ns_data": { 00:25:59.574 "id": 1, 00:25:59.574 "can_share": false 00:25:59.574 } 00:25:59.574 } 00:25:59.574 ], 00:25:59.574 "mp_policy": "active_passive" 00:25:59.574 } 00:25:59.574 } 00:25:59.574 ]' 00:25:59.574 18:32:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:25:59.832 18:32:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:25:59.832 18:32:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:25:59.832 18:32:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # nb=1310720 00:25:59.832 18:32:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:25:59.832 18:32:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # echo 5120 00:25:59.832 18:32:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:25:59.832 18:32:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:25:59.832 18:32:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:25:59.832 18:32:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 
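get_bdev_size, traced above, is one bdev_get_bdevs call plus jq and integer arithmetic: size in MiB = block_size * num_blocks / 1048576. For basen1 that is 4096 * 1310720 / 1048576 = 5120 MiB, which is why bdev_size=5120; note the requested FTL_BASE_SIZE (20480 MiB) is larger than the device, and the data volume created shortly after is thin-provisioned (-t), which is what makes that oversubscription possible. Condensed:

  # Size of a bdev in MiB, exactly as the trace computes it for basen1.
  info=$("$rpc_py" bdev_get_bdevs -b basen1)
  bs=$(jq '.[] .block_size' <<< "$info")   # 4096
  nb=$(jq '.[] .num_blocks' <<< "$info")   # 1310720
  echo $(( bs * nb / 1048576 ))            # 5120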
00:25:59.832 18:32:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:26:00.089 18:32:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=49720080-7b36-4a84-be68-319b606f9891 00:26:00.089 18:32:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:26:00.089 18:32:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 49720080-7b36-4a84-be68-319b606f9891 00:26:00.347 18:32:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:26:00.606 18:32:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=3dc550b8-bb8b-4e86-86bf-fa42d1c26780 00:26:00.606 18:32:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 3dc550b8-bb8b-4e86-86bf-fa42d1c26780 00:26:00.606 18:32:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=fede3240-8fcc-42ef-b65f-c854332203a5 00:26:00.606 18:32:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z fede3240-8fcc-42ef-b65f-c854332203a5 ]] 00:26:00.606 18:32:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 fede3240-8fcc-42ef-b65f-c854332203a5 5120 00:26:00.606 18:32:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:26:00.606 18:32:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:26:00.606 18:32:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=fede3240-8fcc-42ef-b65f-c854332203a5 00:26:00.606 18:32:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:26:00.606 18:32:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size fede3240-8fcc-42ef-b65f-c854332203a5 00:26:00.606 18:32:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=fede3240-8fcc-42ef-b65f-c854332203a5 00:26:00.606 18:32:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:26:00.606 18:32:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:26:00.606 18:32:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:26:00.606 18:32:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b fede3240-8fcc-42ef-b65f-c854332203a5 00:26:00.863 18:32:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:26:00.864 { 00:26:00.864 "name": "fede3240-8fcc-42ef-b65f-c854332203a5", 00:26:00.864 "aliases": [ 00:26:00.864 "lvs/basen1p0" 00:26:00.864 ], 00:26:00.864 "product_name": "Logical Volume", 00:26:00.864 "block_size": 4096, 00:26:00.864 "num_blocks": 5242880, 00:26:00.864 "uuid": "fede3240-8fcc-42ef-b65f-c854332203a5", 00:26:00.864 "assigned_rate_limits": { 00:26:00.864 "rw_ios_per_sec": 0, 00:26:00.864 "rw_mbytes_per_sec": 0, 00:26:00.864 "r_mbytes_per_sec": 0, 00:26:00.864 "w_mbytes_per_sec": 0 00:26:00.864 }, 00:26:00.864 "claimed": false, 00:26:00.864 "zoned": false, 00:26:00.864 "supported_io_types": { 00:26:00.864 "read": true, 00:26:00.864 "write": true, 00:26:00.864 "unmap": true, 00:26:00.864 "flush": false, 00:26:00.864 "reset": true, 00:26:00.864 "nvme_admin": false, 00:26:00.864 "nvme_io": false, 00:26:00.864 "nvme_io_md": false, 00:26:00.864 "write_zeroes": true, 00:26:00.864 "zcopy": false, 
00:26:00.864 "get_zone_info": false, 00:26:00.864 "zone_management": false, 00:26:00.864 "zone_append": false, 00:26:00.864 "compare": false, 00:26:00.864 "compare_and_write": false, 00:26:00.864 "abort": false, 00:26:00.864 "seek_hole": true, 00:26:00.864 "seek_data": true, 00:26:00.864 "copy": false, 00:26:00.864 "nvme_iov_md": false 00:26:00.864 }, 00:26:00.864 "driver_specific": { 00:26:00.864 "lvol": { 00:26:00.864 "lvol_store_uuid": "3dc550b8-bb8b-4e86-86bf-fa42d1c26780", 00:26:00.864 "base_bdev": "basen1", 00:26:00.864 "thin_provision": true, 00:26:00.864 "num_allocated_clusters": 0, 00:26:00.864 "snapshot": false, 00:26:00.864 "clone": false, 00:26:00.864 "esnap_clone": false 00:26:00.864 } 00:26:00.864 } 00:26:00.864 } 00:26:00.864 ]' 00:26:00.864 18:32:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:26:00.864 18:32:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:26:00.864 18:32:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:26:00.864 18:32:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # nb=5242880 00:26:00.864 18:32:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=20480 00:26:00.864 18:32:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # echo 20480 00:26:00.864 18:32:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:26:00.864 18:32:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:26:00.864 18:32:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:26:01.122 18:32:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:26:01.122 18:32:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:26:01.122 18:32:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:26:01.380 18:32:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:26:01.380 18:32:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:26:01.380 18:32:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d fede3240-8fcc-42ef-b65f-c854332203a5 -c cachen1p0 --l2p_dram_limit 2 00:26:01.639 [2024-07-11 18:32:48.003271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:01.639 [2024-07-11 18:32:48.003331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:26:01.639 [2024-07-11 18:32:48.003403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:26:01.639 [2024-07-11 18:32:48.003417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:01.639 [2024-07-11 18:32:48.003507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:01.639 [2024-07-11 18:32:48.003527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:26:01.639 [2024-07-11 18:32:48.003553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.063 ms 00:26:01.639 [2024-07-11 18:32:48.003564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:01.639 [2024-07-11 18:32:48.003598] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:26:01.639 [2024-07-11 18:32:48.003914] mngt/ftl_mngt_bdev.c: 
236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:26:01.639 [2024-07-11 18:32:48.003943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:01.640 [2024-07-11 18:32:48.003961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:26:01.640 [2024-07-11 18:32:48.003975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.356 ms 00:26:01.640 [2024-07-11 18:32:48.003986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:01.640 [2024-07-11 18:32:48.004138] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID b61277fd-2124-4816-a711-7d3f4143c0e1 00:26:01.640 [2024-07-11 18:32:48.005002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:01.640 [2024-07-11 18:32:48.005027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:26:01.640 [2024-07-11 18:32:48.005041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:26:01.640 [2024-07-11 18:32:48.005053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:01.640 [2024-07-11 18:32:48.009139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:01.640 [2024-07-11 18:32:48.009177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:26:01.640 [2024-07-11 18:32:48.009191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.025 ms 00:26:01.640 [2024-07-11 18:32:48.009203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:01.640 [2024-07-11 18:32:48.009260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:01.640 [2024-07-11 18:32:48.009284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:26:01.640 [2024-07-11 18:32:48.009295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:26:01.640 [2024-07-11 18:32:48.009307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:01.640 [2024-07-11 18:32:48.009380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:01.640 [2024-07-11 18:32:48.009414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:26:01.640 [2024-07-11 18:32:48.009426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:26:01.640 [2024-07-11 18:32:48.009438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:01.640 [2024-07-11 18:32:48.009468] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:26:01.640 [2024-07-11 18:32:48.010929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:01.640 [2024-07-11 18:32:48.010954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:26:01.640 [2024-07-11 18:32:48.010970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.466 ms 00:26:01.640 [2024-07-11 18:32:48.010980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:01.640 [2024-07-11 18:32:48.011016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:01.640 [2024-07-11 18:32:48.011030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:26:01.640 [2024-07-11 18:32:48.011044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:26:01.640 [2024-07-11 18:32:48.011055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:01.640 [2024-07-11 
18:32:48.011117] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:26:01.640 [2024-07-11 18:32:48.011262] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:26:01.640 [2024-07-11 18:32:48.011285] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:26:01.640 [2024-07-11 18:32:48.011299] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x168 bytes 00:26:01.640 [2024-07-11 18:32:48.011316] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:26:01.640 [2024-07-11 18:32:48.011329] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:26:01.640 [2024-07-11 18:32:48.011342] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:26:01.640 [2024-07-11 18:32:48.011397] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:26:01.640 [2024-07-11 18:32:48.011410] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:26:01.640 [2024-07-11 18:32:48.011421] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:26:01.640 [2024-07-11 18:32:48.011434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:01.640 [2024-07-11 18:32:48.011445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:26:01.640 [2024-07-11 18:32:48.011458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.323 ms 00:26:01.640 [2024-07-11 18:32:48.011470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:01.640 [2024-07-11 18:32:48.011557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:01.640 [2024-07-11 18:32:48.011572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:26:01.640 [2024-07-11 18:32:48.011588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.060 ms 00:26:01.640 [2024-07-11 18:32:48.011599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:01.640 [2024-07-11 18:32:48.011745] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:26:01.640 [2024-07-11 18:32:48.011777] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:26:01.640 [2024-07-11 18:32:48.011790] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:01.640 [2024-07-11 18:32:48.011801] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:01.640 [2024-07-11 18:32:48.011813] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:26:01.640 [2024-07-11 18:32:48.011823] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:26:01.640 [2024-07-11 18:32:48.011834] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:26:01.640 [2024-07-11 18:32:48.011844] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:26:01.640 [2024-07-11 18:32:48.011857] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:26:01.640 [2024-07-11 18:32:48.011868] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:01.640 [2024-07-11 18:32:48.011879] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:26:01.640 [2024-07-11 18:32:48.011890] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:26:01.640 
[2024-07-11 18:32:48.011901] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:01.640 [2024-07-11 18:32:48.011910] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:26:01.640 [2024-07-11 18:32:48.011924] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:26:01.640 [2024-07-11 18:32:48.011933] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:01.640 [2024-07-11 18:32:48.011944] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:26:01.640 [2024-07-11 18:32:48.011953] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:26:01.640 [2024-07-11 18:32:48.011964] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:01.640 [2024-07-11 18:32:48.011973] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:26:01.640 [2024-07-11 18:32:48.011984] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:26:01.640 [2024-07-11 18:32:48.011993] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:01.640 [2024-07-11 18:32:48.012003] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:26:01.640 [2024-07-11 18:32:48.012013] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:26:01.640 [2024-07-11 18:32:48.012023] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:01.640 [2024-07-11 18:32:48.012033] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:26:01.640 [2024-07-11 18:32:48.012044] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:26:01.640 [2024-07-11 18:32:48.012053] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:01.640 [2024-07-11 18:32:48.012064] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:26:01.640 [2024-07-11 18:32:48.012073] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:26:01.640 [2024-07-11 18:32:48.012085] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:01.640 [2024-07-11 18:32:48.012094] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:26:01.640 [2024-07-11 18:32:48.012105] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:26:01.640 [2024-07-11 18:32:48.012114] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:01.640 [2024-07-11 18:32:48.012141] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:26:01.640 [2024-07-11 18:32:48.012154] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:26:01.640 [2024-07-11 18:32:48.012165] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:01.640 [2024-07-11 18:32:48.012175] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:26:01.640 [2024-07-11 18:32:48.012186] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:26:01.640 [2024-07-11 18:32:48.012195] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:01.640 [2024-07-11 18:32:48.012207] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:26:01.640 [2024-07-11 18:32:48.012217] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:26:01.641 [2024-07-11 18:32:48.012227] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:01.641 [2024-07-11 18:32:48.012236] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 
00:26:01.641 [2024-07-11 18:32:48.012248] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:26:01.641 [2024-07-11 18:32:48.012260] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:01.641 [2024-07-11 18:32:48.012274] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:01.641 [2024-07-11 18:32:48.012293] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:26:01.641 [2024-07-11 18:32:48.012305] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:26:01.641 [2024-07-11 18:32:48.012316] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:26:01.641 [2024-07-11 18:32:48.012327] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:26:01.641 [2024-07-11 18:32:48.012336] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:26:01.641 [2024-07-11 18:32:48.012347] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:26:01.641 [2024-07-11 18:32:48.012361] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:26:01.641 [2024-07-11 18:32:48.012379] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:01.641 [2024-07-11 18:32:48.012397] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:26:01.641 [2024-07-11 18:32:48.012411] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:26:01.641 [2024-07-11 18:32:48.012445] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:26:01.641 [2024-07-11 18:32:48.012457] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:26:01.641 [2024-07-11 18:32:48.012467] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:26:01.641 [2024-07-11 18:32:48.012478] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:26:01.641 [2024-07-11 18:32:48.012488] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:26:01.641 [2024-07-11 18:32:48.012501] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:26:01.641 [2024-07-11 18:32:48.012510] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:26:01.641 [2024-07-11 18:32:48.012521] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:26:01.641 [2024-07-11 18:32:48.012531] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:26:01.641 [2024-07-11 18:32:48.012542] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:26:01.641 [2024-07-11 18:32:48.012551] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 
blk_sz:0x20 00:26:01.641 [2024-07-11 18:32:48.012562] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:26:01.641 [2024-07-11 18:32:48.012572] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:26:01.641 [2024-07-11 18:32:48.012584] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:01.641 [2024-07-11 18:32:48.012595] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:01.641 [2024-07-11 18:32:48.012606] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:26:01.641 [2024-07-11 18:32:48.012618] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:26:01.641 [2024-07-11 18:32:48.012631] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:26:01.641 [2024-07-11 18:32:48.012642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:01.641 [2024-07-11 18:32:48.012654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:26:01.641 [2024-07-11 18:32:48.012665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.971 ms 00:26:01.641 [2024-07-11 18:32:48.012679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:01.641 [2024-07-11 18:32:48.012735] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
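Scrubbing aside, the RPC sequence since the target came up assembles a two-device stack under the FTL bdev: one PCIe NVMe carries the data volume (lvstore plus thin lvol), the other carries the write-buffer cache (a 5120 MiB split). Condensed, with the UUIDs from this particular run:

  "$rpc_py" bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0    # -> basen1
  "$rpc_py" bdev_lvol_create_lvstore basen1 lvs
  "$rpc_py" bdev_lvol_create basen1p0 20480 -t -u 3dc550b8-bb8b-4e86-86bf-fa42d1c26780
  "$rpc_py" bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0   # -> cachen1
  "$rpc_py" bdev_split_create cachen1 -s 5120 1                            # -> cachen1p0
  "$rpc_py" -t 60 bdev_ftl_create -b ftl -d fede3240-8fcc-42ef-b65f-c854332203a5 \
      -c cachen1p0 --l2p_dram_limit 2

The layout dump above also pins the L2P geometry: 3774873 entries * 4 bytes is about 14.4 MiB, matching the 14.50 MiB l2p region, while --l2p_dram_limit 2 caps how much of it stays resident (the later "l2p maximum resident size is: 1 (of 2) MiB" notice confirms the cap).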
00:26:01.641 [2024-07-11 18:32:48.012753] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:26:05.826 [2024-07-11 18:32:51.360951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:05.826 [2024-07-11 18:32:51.361030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:26:05.826 [2024-07-11 18:32:51.361049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3348.235 ms 00:26:05.826 [2024-07-11 18:32:51.361062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:05.826 [2024-07-11 18:32:51.368051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:05.826 [2024-07-11 18:32:51.368109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:26:05.826 [2024-07-11 18:32:51.368128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.861 ms 00:26:05.826 [2024-07-11 18:32:51.368154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:05.826 [2024-07-11 18:32:51.368210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:05.826 [2024-07-11 18:32:51.368230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:26:05.826 [2024-07-11 18:32:51.368242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:26:05.826 [2024-07-11 18:32:51.368254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:05.826 [2024-07-11 18:32:51.376254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:05.826 [2024-07-11 18:32:51.376310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:26:05.826 [2024-07-11 18:32:51.376326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.931 ms 00:26:05.826 [2024-07-11 18:32:51.376338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:05.826 [2024-07-11 18:32:51.376379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:05.826 [2024-07-11 18:32:51.376397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:26:05.826 [2024-07-11 18:32:51.376409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:26:05.826 [2024-07-11 18:32:51.376421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:05.826 [2024-07-11 18:32:51.376773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:05.826 [2024-07-11 18:32:51.376793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:26:05.826 [2024-07-11 18:32:51.376806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.308 ms 00:26:05.826 [2024-07-11 18:32:51.376818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:05.826 [2024-07-11 18:32:51.376866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:05.826 [2024-07-11 18:32:51.376884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:26:05.826 [2024-07-11 18:32:51.376896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:26:05.826 [2024-07-11 18:32:51.376921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:05.826 [2024-07-11 18:32:51.382340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:05.826 [2024-07-11 18:32:51.382391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:26:05.826 [2024-07-11 18:32:51.382406] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.396 ms 00:26:05.826 [2024-07-11 18:32:51.382419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:05.826 [2024-07-11 18:32:51.390101] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:26:05.826 [2024-07-11 18:32:51.390943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:05.826 [2024-07-11 18:32:51.390971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:26:05.826 [2024-07-11 18:32:51.391003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.431 ms 00:26:05.826 [2024-07-11 18:32:51.391014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:05.826 [2024-07-11 18:32:51.425811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:05.826 [2024-07-11 18:32:51.425869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:26:05.826 [2024-07-11 18:32:51.425892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 34.764 ms 00:26:05.826 [2024-07-11 18:32:51.425904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:05.826 [2024-07-11 18:32:51.426054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:05.826 [2024-07-11 18:32:51.426128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:26:05.826 [2024-07-11 18:32:51.426162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.077 ms 00:26:05.826 [2024-07-11 18:32:51.426182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:05.827 [2024-07-11 18:32:51.429673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:05.827 [2024-07-11 18:32:51.429724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:26:05.827 [2024-07-11 18:32:51.429743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.456 ms 00:26:05.827 [2024-07-11 18:32:51.429756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:05.827 [2024-07-11 18:32:51.433379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:05.827 [2024-07-11 18:32:51.433415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:26:05.827 [2024-07-11 18:32:51.433433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.574 ms 00:26:05.827 [2024-07-11 18:32:51.433459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:05.827 [2024-07-11 18:32:51.433838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:05.827 [2024-07-11 18:32:51.433859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:26:05.827 [2024-07-11 18:32:51.433873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.305 ms 00:26:05.827 [2024-07-11 18:32:51.433884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:05.827 [2024-07-11 18:32:51.489611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:05.827 [2024-07-11 18:32:51.489679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:26:05.827 [2024-07-11 18:32:51.489701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 55.691 ms 00:26:05.827 [2024-07-11 18:32:51.489715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:05.827 [2024-07-11 18:32:51.493862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:26:05.827 [2024-07-11 18:32:51.493912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:26:05.827 [2024-07-11 18:32:51.493930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.111 ms 00:26:05.827 [2024-07-11 18:32:51.493941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:05.827 [2024-07-11 18:32:51.497672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:05.827 [2024-07-11 18:32:51.497721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:26:05.827 [2024-07-11 18:32:51.497737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.700 ms 00:26:05.827 [2024-07-11 18:32:51.497747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:05.827 [2024-07-11 18:32:51.501460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:05.827 [2024-07-11 18:32:51.501511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:26:05.827 [2024-07-11 18:32:51.501529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.683 ms 00:26:05.827 [2024-07-11 18:32:51.501539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:05.827 [2024-07-11 18:32:51.501575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:05.827 [2024-07-11 18:32:51.501590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:26:05.827 [2024-07-11 18:32:51.501604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:26:05.827 [2024-07-11 18:32:51.501614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:05.827 [2024-07-11 18:32:51.501714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:05.827 [2024-07-11 18:32:51.501729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:26:05.827 [2024-07-11 18:32:51.501742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.049 ms 00:26:05.827 [2024-07-11 18:32:51.501765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:05.827 [2024-07-11 18:32:51.502918] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3499.157 ms, result 0 00:26:05.827 { 00:26:05.827 "name": "ftl", 00:26:05.827 "uuid": "b61277fd-2124-4816-a711-7d3f4143c0e1" 00:26:05.827 } 00:26:05.827 18:32:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:26:05.827 [2024-07-11 18:32:51.771573] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:05.827 18:32:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:26:05.827 18:32:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:26:05.827 [2024-07-11 18:32:52.232045] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:26:06.086 18:32:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:26:06.086 [2024-07-11 18:32:52.440635] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:26:06.086 18:32:52 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:26:06.653 18:32:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:26:06.653 18:32:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:26:06.653 18:32:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:26:06.653 18:32:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:26:06.653 18:32:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:26:06.653 18:32:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:26:06.653 18:32:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:26:06.653 18:32:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:26:06.653 18:32:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:26:06.653 18:32:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:26:06.653 Fill FTL, iteration 1 00:26:06.653 18:32:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:26:06.653 18:32:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:26:06.653 18:32:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:06.653 18:32:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:06.653 18:32:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:06.653 18:32:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:26:06.653 18:32:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=95841 00:26:06.653 18:32:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:26:06.653 18:32:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:26:06.653 18:32:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 95841 /var/tmp/spdk.tgt.sock 00:26:06.653 18:32:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@829 -- # '[' -z 95841 ']' 00:26:06.653 18:32:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:26:06.653 18:32:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:06.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:26:06.653 18:32:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:26:06.653 18:32:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:06.653 18:32:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:06.653 [2024-07-11 18:32:52.919024] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
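The variables set just before this launch fix the fill geometry: bs=1048576 with count=1024 means each iteration moves exactly size=1073741824 bytes (1 GiB) at queue depth qd=2, and seek/skip are counted in those same 1 MiB blocks, so iteration 2 will start at --seek=1024, directly behind iteration 1's data:

  # Per-iteration geometry, derived from the traced variables.
  bs=1048576; count=1024
  echo $(( bs * count ))      # 1073741824 bytes, the 'size' set above
  seek=$(( i * count ))       # 0 for iteration 1, 1024 for iteration 2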
00:26:06.653 [2024-07-11 18:32:52.919186] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95841 ] 00:26:06.653 [2024-07-11 18:32:53.060517] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:06.911 [2024-07-11 18:32:53.101797] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:07.478 18:32:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:07.478 18:32:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # return 0 00:26:07.478 18:32:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:26:07.736 ftln1 00:26:07.736 18:32:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:26:07.736 18:32:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:26:07.995 18:32:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:26:07.996 18:32:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 95841 00:26:07.996 18:32:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@948 -- # '[' -z 95841 ']' 00:26:07.996 18:32:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # kill -0 95841 00:26:07.996 18:32:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@953 -- # uname 00:26:07.996 18:32:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:07.996 18:32:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 95841 00:26:07.996 18:32:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:07.996 killing process with pid 95841 00:26:07.996 18:32:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:07.996 18:32:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@966 -- # echo 'killing process with pid 95841' 00:26:07.996 18:32:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@967 -- # kill 95841 00:26:07.996 18:32:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # wait 95841 00:26:08.254 18:32:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:26:08.254 18:32:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:26:08.254 [2024-07-11 18:32:54.618003] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
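The echo '{"subsystems": [' / save_subsystem_config -n bdev / echo ']}' trio above is how the initiator config is built: the bdev subsystem of the live initiator app, which now holds the NVMe/TCP controller backing ftln1, is dumped and wrapped into a standalone JSON that spdk_dd can replay with --json. Roughly:

  # Build ini.json from the running initiator's bdev configuration.
  {
      echo '{"subsystems": ['
      "$rpc_py" -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev
      echo ']}'
  } > "$spdk_ini_cnfg"

Once the file exists the initiator process itself is expendable, which is why pid 95841 is killed right away: spdk_dd needs only the JSON, not a live peer.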
00:26:08.254 [2024-07-11 18:32:54.618169] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95872 ] 00:26:08.513 [2024-07-11 18:32:54.756726] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:08.513 [2024-07-11 18:32:54.787809] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:13.636  Copying: 217/1024 [MB] (217 MBps) Copying: 435/1024 [MB] (218 MBps) Copying: 649/1024 [MB] (214 MBps) Copying: 867/1024 [MB] (218 MBps) Copying: 1024/1024 [MB] (average 216 MBps) 00:26:13.636 00:26:13.636 18:32:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:26:13.636 Calculate MD5 checksum, iteration 1 00:26:13.636 18:32:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:26:13.636 18:32:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:26:13.636 18:32:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:13.636 18:32:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:13.636 18:32:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:13.636 18:32:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:26:13.636 18:32:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:26:13.636 [2024-07-11 18:32:59.988474] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
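The spdk_dd starting here runs the read-back half of the check: the same 1024 MiB is pulled from ftln1 over NVMe/TCP into test/ftl/file with --skip=0, and md5sum's digest column is stashed for the post-shutdown comparison. Condensed:

  "$spdk_dd_bin" '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock \
      --json="$spdk_ini_cnfg" --ib=ftln1 --of="$testdir/file" \
      --bs=1048576 --count=1024 --qd=2 --skip=0
  sums[i]=$(md5sum "$testdir/file" | cut -f1 '-d ')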
00:26:13.636 [2024-07-11 18:32:59.988654] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95925 ] 00:26:13.895 [2024-07-11 18:33:00.137805] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:13.895 [2024-07-11 18:33:00.171145] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:16.470  Copying: 453/1024 [MB] (453 MBps) Copying: 908/1024 [MB] (455 MBps) Copying: 1024/1024 [MB] (average 455 MBps) 00:26:16.470 00:26:16.470 18:33:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:26:16.470 18:33:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:18.425 18:33:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:26:18.425 Fill FTL, iteration 2 00:26:18.425 18:33:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=024dc100ecdfcb76b8b1ceac7710cec8 00:26:18.425 18:33:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:26:18.425 18:33:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:26:18.425 18:33:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:26:18.425 18:33:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:26:18.425 18:33:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:18.425 18:33:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:18.425 18:33:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:18.425 18:33:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:26:18.425 18:33:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:26:18.425 [2024-07-11 18:33:04.816498] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
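The fill/checksum loop of upgrade_shutdown.sh is visible in the @38-@48 traces: each iteration writes 1024 MiB of fresh random data at the next 1 GiB offset (--seek), reads the same window back (--skip), and records its MD5 in sums[i] for comparison after the restart. Schematically, using the tcp_dd helper sketched earlier (a sketch implied by the traces, not the script verbatim):

    iterations=2   # two passes in this run
    seek=0 skip=0 i=0
    while (( i < iterations )); do
        tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=$seek
        seek=$((seek + 1024))
        tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file \
            --bs=1048576 --count=1024 --qd=2 --skip=$skip
        skip=$((skip + 1024))
        sums[i]=$(md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file | cut -f1 -d' ')
        i=$((i + 1))
    done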
00:26:18.425 [2024-07-11 18:33:04.816699] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95981 ] 00:26:18.683 [2024-07-11 18:33:04.966214] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:18.684 [2024-07-11 18:33:05.006923] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:23.795  Copying: 218/1024 [MB] (218 MBps) Copying: 433/1024 [MB] (215 MBps) Copying: 647/1024 [MB] (214 MBps) Copying: 863/1024 [MB] (216 MBps) Copying: 1024/1024 [MB] (average 215 MBps) 00:26:23.795 00:26:23.795 Calculate MD5 checksum, iteration 2 00:26:23.795 18:33:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:26:23.795 18:33:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:26:23.795 18:33:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:26:23.795 18:33:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:23.795 18:33:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:23.795 18:33:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:23.795 18:33:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:26:23.795 18:33:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:26:24.053 [2024-07-11 18:33:10.233721] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
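A quick sanity check on the reported rates: the fill passes average about 216 MBps, so each 1024 MiB write takes roughly 1024 / 216, or about 4.7 s, while the read-back side averages about 455 MBps, or about 2.3 s per pass. The asymmetry is plausible given that writes are staged through the FTL write buffer cache while reads are served more directly, though the log itself does not break this down.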
00:26:24.053 [2024-07-11 18:33:10.233921] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96034 ] 00:26:24.053 [2024-07-11 18:33:10.380843] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:24.053 [2024-07-11 18:33:10.413318] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:27.190  Copying: 464/1024 [MB] (464 MBps) Copying: 920/1024 [MB] (456 MBps) Copying: 1024/1024 [MB] (average 459 MBps) 00:26:27.190 00:26:27.448 18:33:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:26:27.448 18:33:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:29.347 18:33:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:26:29.347 18:33:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=5608e81063ce2981fb41dcd812ed704a 00:26:29.347 18:33:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:26:29.347 18:33:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:26:29.347 18:33:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:26:29.604 [2024-07-11 18:33:15.770119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:29.604 [2024-07-11 18:33:15.770235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:26:29.604 [2024-07-11 18:33:15.770271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:26:29.604 [2024-07-11 18:33:15.770282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:29.604 [2024-07-11 18:33:15.770316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:29.604 [2024-07-11 18:33:15.770339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:26:29.604 [2024-07-11 18:33:15.770359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:26:29.604 [2024-07-11 18:33:15.770370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:29.604 [2024-07-11 18:33:15.770397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:29.604 [2024-07-11 18:33:15.770410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:26:29.604 [2024-07-11 18:33:15.770421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:26:29.604 [2024-07-11 18:33:15.770430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:29.605 [2024-07-11 18:33:15.770546] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.382 ms, result 0 00:26:29.605 true 00:26:29.605 18:33:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:26:29.605 { 00:26:29.605 "name": "ftl", 00:26:29.605 "properties": [ 00:26:29.605 { 00:26:29.605 "name": "superblock_version", 00:26:29.605 "value": 5, 00:26:29.605 "read-only": true 00:26:29.605 }, 00:26:29.605 { 00:26:29.605 "name": "base_device", 00:26:29.605 "bands": [ 00:26:29.605 { 00:26:29.605 "id": 0, 00:26:29.605 "state": "FREE", 00:26:29.605 "validity": 0.0 00:26:29.605 }, 
00:26:29.605 { 00:26:29.605 "id": 1, 00:26:29.605 "state": "FREE", 00:26:29.605 "validity": 0.0 00:26:29.605 }, 00:26:29.605 { 00:26:29.605 "id": 2, 00:26:29.605 "state": "FREE", 00:26:29.605 "validity": 0.0 00:26:29.605 }, 00:26:29.605 { 00:26:29.605 "id": 3, 00:26:29.605 "state": "FREE", 00:26:29.605 "validity": 0.0 00:26:29.605 }, 00:26:29.605 { 00:26:29.605 "id": 4, 00:26:29.605 "state": "FREE", 00:26:29.605 "validity": 0.0 00:26:29.605 }, 00:26:29.605 { 00:26:29.605 "id": 5, 00:26:29.605 "state": "FREE", 00:26:29.605 "validity": 0.0 00:26:29.605 }, 00:26:29.605 { 00:26:29.605 "id": 6, 00:26:29.605 "state": "FREE", 00:26:29.605 "validity": 0.0 00:26:29.605 }, 00:26:29.605 { 00:26:29.605 "id": 7, 00:26:29.605 "state": "FREE", 00:26:29.605 "validity": 0.0 00:26:29.605 }, 00:26:29.605 { 00:26:29.605 "id": 8, 00:26:29.605 "state": "FREE", 00:26:29.605 "validity": 0.0 00:26:29.605 }, 00:26:29.605 { 00:26:29.605 "id": 9, 00:26:29.605 "state": "FREE", 00:26:29.605 "validity": 0.0 00:26:29.605 }, 00:26:29.605 { 00:26:29.605 "id": 10, 00:26:29.605 "state": "FREE", 00:26:29.605 "validity": 0.0 00:26:29.605 }, 00:26:29.605 { 00:26:29.605 "id": 11, 00:26:29.605 "state": "FREE", 00:26:29.605 "validity": 0.0 00:26:29.605 }, 00:26:29.605 { 00:26:29.605 "id": 12, 00:26:29.605 "state": "FREE", 00:26:29.605 "validity": 0.0 00:26:29.605 }, 00:26:29.605 { 00:26:29.605 "id": 13, 00:26:29.605 "state": "FREE", 00:26:29.605 "validity": 0.0 00:26:29.605 }, 00:26:29.605 { 00:26:29.605 "id": 14, 00:26:29.605 "state": "FREE", 00:26:29.605 "validity": 0.0 00:26:29.605 }, 00:26:29.605 { 00:26:29.605 "id": 15, 00:26:29.605 "state": "FREE", 00:26:29.605 "validity": 0.0 00:26:29.605 }, 00:26:29.605 { 00:26:29.605 "id": 16, 00:26:29.605 "state": "FREE", 00:26:29.605 "validity": 0.0 00:26:29.605 }, 00:26:29.605 { 00:26:29.605 "id": 17, 00:26:29.605 "state": "FREE", 00:26:29.605 "validity": 0.0 00:26:29.605 } 00:26:29.605 ], 00:26:29.605 "read-only": true 00:26:29.605 }, 00:26:29.605 { 00:26:29.605 "name": "cache_device", 00:26:29.605 "type": "bdev", 00:26:29.605 "chunks": [ 00:26:29.605 { 00:26:29.605 "id": 0, 00:26:29.605 "state": "INACTIVE", 00:26:29.605 "utilization": 0.0 00:26:29.605 }, 00:26:29.605 { 00:26:29.605 "id": 1, 00:26:29.605 "state": "CLOSED", 00:26:29.605 "utilization": 1.0 00:26:29.605 }, 00:26:29.605 { 00:26:29.605 "id": 2, 00:26:29.605 "state": "CLOSED", 00:26:29.605 "utilization": 1.0 00:26:29.605 }, 00:26:29.605 { 00:26:29.605 "id": 3, 00:26:29.605 "state": "OPEN", 00:26:29.605 "utilization": 0.001953125 00:26:29.605 }, 00:26:29.605 { 00:26:29.605 "id": 4, 00:26:29.605 "state": "OPEN", 00:26:29.605 "utilization": 0.0 00:26:29.605 } 00:26:29.605 ], 00:26:29.605 "read-only": true 00:26:29.605 }, 00:26:29.605 { 00:26:29.605 "name": "verbose_mode", 00:26:29.605 "value": true, 00:26:29.605 "unit": "", 00:26:29.605 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:26:29.605 }, 00:26:29.605 { 00:26:29.605 "name": "prep_upgrade_on_shutdown", 00:26:29.605 "value": false, 00:26:29.605 "unit": "", 00:26:29.605 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:26:29.605 } 00:26:29.605 ] 00:26:29.605 } 00:26:29.605 18:33:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:26:29.863 [2024-07-11 18:33:16.174569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:29.863 [2024-07-11 
18:33:16.174635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:26:29.863 [2024-07-11 18:33:16.174668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:26:29.863 [2024-07-11 18:33:16.174678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:29.863 [2024-07-11 18:33:16.174709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:29.863 [2024-07-11 18:33:16.174724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:26:29.863 [2024-07-11 18:33:16.174735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:26:29.863 [2024-07-11 18:33:16.174745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:29.863 [2024-07-11 18:33:16.174769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:29.863 [2024-07-11 18:33:16.174781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:26:29.863 [2024-07-11 18:33:16.174791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:26:29.863 [2024-07-11 18:33:16.174800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:29.863 [2024-07-11 18:33:16.174867] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.286 ms, result 0 00:26:29.863 true 00:26:29.863 18:33:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:26:29.863 18:33:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:26:29.863 18:33:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:26:30.122 18:33:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:26:30.122 18:33:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:26:30.122 18:33:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:26:30.381 [2024-07-11 18:33:16.639039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:30.381 [2024-07-11 18:33:16.639100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:26:30.381 [2024-07-11 18:33:16.639146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:26:30.381 [2024-07-11 18:33:16.639156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:30.381 [2024-07-11 18:33:16.639190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:30.381 [2024-07-11 18:33:16.639205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:26:30.381 [2024-07-11 18:33:16.639216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:26:30.381 [2024-07-11 18:33:16.639225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:30.381 [2024-07-11 18:33:16.639248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:30.381 [2024-07-11 18:33:16.639260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:26:30.381 [2024-07-11 18:33:16.639270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:26:30.381 [2024-07-11 18:33:16.639278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
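The used=3 above comes straight from the properties JSON printed a few lines earlier: chunks 1 and 2 are CLOSED at utilization 1.0 (the two 1 GiB fills), and chunk 3 is OPEN at 0.001953125, which, at about 1 GiB per chunk (the layout dump later in the log reports a 5120.00 MiB NV cache split into 5 chunks), works out to 0.001953125 * 1024 MiB = 2 MiB, likely metadata rather than user data. The @63 filter simply counts every chunk with non-zero utilization:

    # the chunk-counting query from @63 above
    rpc.py bdev_ftl_get_properties -b ftl |
        jq '[.properties[] | select(.name == "cache_device")
             | .chunks[] | select(.utilization != 0.0)] | length'   # -> 3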
00:26:30.381 [2024-07-11 18:33:16.639401] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.292 ms, result 0 00:26:30.381 true 00:26:30.381 18:33:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:26:30.641 { 00:26:30.641 "name": "ftl", 00:26:30.641 "properties": [ 00:26:30.641 { 00:26:30.641 "name": "superblock_version", 00:26:30.641 "value": 5, 00:26:30.641 "read-only": true 00:26:30.641 }, 00:26:30.641 { 00:26:30.641 "name": "base_device", 00:26:30.641 "bands": [ 00:26:30.641 { 00:26:30.641 "id": 0, 00:26:30.641 "state": "FREE", 00:26:30.641 "validity": 0.0 00:26:30.641 }, 00:26:30.641 { 00:26:30.641 "id": 1, 00:26:30.641 "state": "FREE", 00:26:30.641 "validity": 0.0 00:26:30.641 }, 00:26:30.641 { 00:26:30.641 "id": 2, 00:26:30.641 "state": "FREE", 00:26:30.641 "validity": 0.0 00:26:30.641 }, 00:26:30.641 { 00:26:30.641 "id": 3, 00:26:30.641 "state": "FREE", 00:26:30.641 "validity": 0.0 00:26:30.641 }, 00:26:30.641 { 00:26:30.641 "id": 4, 00:26:30.641 "state": "FREE", 00:26:30.641 "validity": 0.0 00:26:30.641 }, 00:26:30.641 { 00:26:30.641 "id": 5, 00:26:30.641 "state": "FREE", 00:26:30.641 "validity": 0.0 00:26:30.641 }, 00:26:30.641 { 00:26:30.641 "id": 6, 00:26:30.641 "state": "FREE", 00:26:30.641 "validity": 0.0 00:26:30.641 }, 00:26:30.641 { 00:26:30.641 "id": 7, 00:26:30.641 "state": "FREE", 00:26:30.641 "validity": 0.0 00:26:30.641 }, 00:26:30.641 { 00:26:30.641 "id": 8, 00:26:30.641 "state": "FREE", 00:26:30.641 "validity": 0.0 00:26:30.641 }, 00:26:30.641 { 00:26:30.641 "id": 9, 00:26:30.641 "state": "FREE", 00:26:30.641 "validity": 0.0 00:26:30.641 }, 00:26:30.641 { 00:26:30.641 "id": 10, 00:26:30.641 "state": "FREE", 00:26:30.641 "validity": 0.0 00:26:30.641 }, 00:26:30.641 { 00:26:30.641 "id": 11, 00:26:30.641 "state": "FREE", 00:26:30.641 "validity": 0.0 00:26:30.641 }, 00:26:30.641 { 00:26:30.641 "id": 12, 00:26:30.641 "state": "FREE", 00:26:30.641 "validity": 0.0 00:26:30.641 }, 00:26:30.641 { 00:26:30.641 "id": 13, 00:26:30.641 "state": "FREE", 00:26:30.641 "validity": 0.0 00:26:30.641 }, 00:26:30.641 { 00:26:30.641 "id": 14, 00:26:30.641 "state": "FREE", 00:26:30.641 "validity": 0.0 00:26:30.641 }, 00:26:30.641 { 00:26:30.641 "id": 15, 00:26:30.641 "state": "FREE", 00:26:30.641 "validity": 0.0 00:26:30.641 }, 00:26:30.641 { 00:26:30.641 "id": 16, 00:26:30.641 "state": "FREE", 00:26:30.641 "validity": 0.0 00:26:30.641 }, 00:26:30.641 { 00:26:30.641 "id": 17, 00:26:30.641 "state": "FREE", 00:26:30.641 "validity": 0.0 00:26:30.641 } 00:26:30.641 ], 00:26:30.641 "read-only": true 00:26:30.641 }, 00:26:30.641 { 00:26:30.641 "name": "cache_device", 00:26:30.641 "type": "bdev", 00:26:30.641 "chunks": [ 00:26:30.641 { 00:26:30.641 "id": 0, 00:26:30.641 "state": "INACTIVE", 00:26:30.641 "utilization": 0.0 00:26:30.641 }, 00:26:30.641 { 00:26:30.641 "id": 1, 00:26:30.641 "state": "CLOSED", 00:26:30.641 "utilization": 1.0 00:26:30.641 }, 00:26:30.641 { 00:26:30.641 "id": 2, 00:26:30.641 "state": "CLOSED", 00:26:30.641 "utilization": 1.0 00:26:30.641 }, 00:26:30.641 { 00:26:30.641 "id": 3, 00:26:30.641 "state": "OPEN", 00:26:30.641 "utilization": 0.001953125 00:26:30.641 }, 00:26:30.641 { 00:26:30.641 "id": 4, 00:26:30.641 "state": "OPEN", 00:26:30.641 "utilization": 0.0 00:26:30.641 } 00:26:30.641 ], 00:26:30.641 "read-only": true 00:26:30.641 }, 00:26:30.641 { 00:26:30.641 "name": "verbose_mode", 00:26:30.641 "value": 
true, 00:26:30.641 "unit": "", 00:26:30.641 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:26:30.641 }, 00:26:30.641 { 00:26:30.641 "name": "prep_upgrade_on_shutdown", 00:26:30.641 "value": true, 00:26:30.641 "unit": "", 00:26:30.641 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:26:30.641 } 00:26:30.641 ] 00:26:30.641 } 00:26:30.641 18:33:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:26:30.641 18:33:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 95719 ]] 00:26:30.641 18:33:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 95719 00:26:30.641 18:33:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@948 -- # '[' -z 95719 ']' 00:26:30.641 18:33:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # kill -0 95719 00:26:30.641 18:33:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@953 -- # uname 00:26:30.641 18:33:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:30.641 18:33:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 95719 00:26:30.641 18:33:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:30.642 18:33:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:30.642 18:33:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@966 -- # echo 'killing process with pid 95719' 00:26:30.642 killing process with pid 95719 00:26:30.642 18:33:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@967 -- # kill 95719 00:26:30.642 18:33:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # wait 95719 00:26:30.642 [2024-07-11 18:33:16.985038] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:26:30.642 [2024-07-11 18:33:16.989690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:30.642 [2024-07-11 18:33:16.989747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:26:30.642 [2024-07-11 18:33:16.989781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:26:30.642 [2024-07-11 18:33:16.989792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:30.642 [2024-07-11 18:33:16.989821] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:26:30.642 [2024-07-11 18:33:16.990331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:30.642 [2024-07-11 18:33:16.990388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:26:30.642 [2024-07-11 18:33:16.990402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.491 ms 00:26:30.642 [2024-07-11 18:33:16.990414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:38.755 [2024-07-11 18:33:25.004277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:38.755 [2024-07-11 18:33:25.004347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:26:38.755 [2024-07-11 18:33:25.004381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8013.839 ms 00:26:38.755 [2024-07-11 18:33:25.004392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:38.755 [2024-07-11 18:33:25.005577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:26:38.755 [2024-07-11 18:33:25.005627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:26:38.755 [2024-07-11 18:33:25.005641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.158 ms 00:26:38.755 [2024-07-11 18:33:25.005653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:38.755 [2024-07-11 18:33:25.006828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:38.755 [2024-07-11 18:33:25.006869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:26:38.755 [2024-07-11 18:33:25.006881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.136 ms 00:26:38.755 [2024-07-11 18:33:25.006890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:38.755 [2024-07-11 18:33:25.008436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:38.755 [2024-07-11 18:33:25.008516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:26:38.755 [2024-07-11 18:33:25.008529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.491 ms 00:26:38.755 [2024-07-11 18:33:25.008539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:38.755 [2024-07-11 18:33:25.010783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:38.755 [2024-07-11 18:33:25.010836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:26:38.755 [2024-07-11 18:33:25.010882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.208 ms 00:26:38.755 [2024-07-11 18:33:25.010892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:38.755 [2024-07-11 18:33:25.010978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:38.755 [2024-07-11 18:33:25.010995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:26:38.755 [2024-07-11 18:33:25.011007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.050 ms 00:26:38.755 [2024-07-11 18:33:25.011023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:38.756 [2024-07-11 18:33:25.012416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:38.756 [2024-07-11 18:33:25.012461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: persist band info metadata 00:26:38.756 [2024-07-11 18:33:25.012474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.374 ms 00:26:38.756 [2024-07-11 18:33:25.012484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:38.756 [2024-07-11 18:33:25.013656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:38.756 [2024-07-11 18:33:25.013721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: persist trim metadata 00:26:38.756 [2024-07-11 18:33:25.013733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.138 ms 00:26:38.756 [2024-07-11 18:33:25.013743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:38.756 [2024-07-11 18:33:25.014816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:38.756 [2024-07-11 18:33:25.014880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:26:38.756 [2024-07-11 18:33:25.014893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.039 ms 00:26:38.756 [2024-07-11 18:33:25.014902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:38.756 [2024-07-11 18:33:25.016230] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:26:38.756 [2024-07-11 18:33:25.016271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:26:38.756 [2024-07-11 18:33:25.016284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.253 ms 00:26:38.756 [2024-07-11 18:33:25.016293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:38.756 [2024-07-11 18:33:25.016330] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:26:38.756 [2024-07-11 18:33:25.016349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:26:38.756 [2024-07-11 18:33:25.016376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:26:38.756 [2024-07-11 18:33:25.016387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:26:38.756 [2024-07-11 18:33:25.016397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:38.756 [2024-07-11 18:33:25.016407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:38.756 [2024-07-11 18:33:25.016417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:38.756 [2024-07-11 18:33:25.016426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:38.756 [2024-07-11 18:33:25.016436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:38.756 [2024-07-11 18:33:25.016446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:38.756 [2024-07-11 18:33:25.016482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:38.756 [2024-07-11 18:33:25.016492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:38.756 [2024-07-11 18:33:25.016502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:38.756 [2024-07-11 18:33:25.016512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:38.756 [2024-07-11 18:33:25.016522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:38.756 [2024-07-11 18:33:25.016532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:38.756 [2024-07-11 18:33:25.016542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:38.756 [2024-07-11 18:33:25.016552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:38.756 [2024-07-11 18:33:25.016562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:38.756 [2024-07-11 18:33:25.016575] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:26:38.756 [2024-07-11 18:33:25.016585] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: b61277fd-2124-4816-a711-7d3f4143c0e1 00:26:38.756 [2024-07-11 18:33:25.016595] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:26:38.756 [2024-07-11 18:33:25.016604] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 786752 
00:26:38.756 [2024-07-11 18:33:25.016613] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:26:38.756 [2024-07-11 18:33:25.016624] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:26:38.756 [2024-07-11 18:33:25.016633] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:26:38.756 [2024-07-11 18:33:25.016643] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:26:38.756 [2024-07-11 18:33:25.016652] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:26:38.756 [2024-07-11 18:33:25.016661] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:26:38.756 [2024-07-11 18:33:25.016670] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:26:38.756 [2024-07-11 18:33:25.016680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:38.756 [2024-07-11 18:33:25.016698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:26:38.756 [2024-07-11 18:33:25.016709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.352 ms 00:26:38.756 [2024-07-11 18:33:25.016719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:38.756 [2024-07-11 18:33:25.018007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:38.756 [2024-07-11 18:33:25.018052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:26:38.756 [2024-07-11 18:33:25.018065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.268 ms 00:26:38.756 [2024-07-11 18:33:25.018075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:38.756 [2024-07-11 18:33:25.018165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:38.756 [2024-07-11 18:33:25.018180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:26:38.756 [2024-07-11 18:33:25.018192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.050 ms 00:26:38.756 [2024-07-11 18:33:25.018201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:38.756 [2024-07-11 18:33:25.022955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:38.756 [2024-07-11 18:33:25.023010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:26:38.756 [2024-07-11 18:33:25.023035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:38.756 [2024-07-11 18:33:25.023045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:38.756 [2024-07-11 18:33:25.023110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:38.756 [2024-07-11 18:33:25.023125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:26:38.756 [2024-07-11 18:33:25.023136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:38.756 [2024-07-11 18:33:25.023145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:38.756 [2024-07-11 18:33:25.023208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:38.756 [2024-07-11 18:33:25.023225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:26:38.756 [2024-07-11 18:33:25.023236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:38.756 [2024-07-11 18:33:25.023261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:38.756 [2024-07-11 18:33:25.023304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl] Rollback 00:26:38.756 [2024-07-11 18:33:25.023318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:26:38.756 [2024-07-11 18:33:25.023337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:38.756 [2024-07-11 18:33:25.023347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:38.756 [2024-07-11 18:33:25.031064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:38.756 [2024-07-11 18:33:25.031155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:26:38.756 [2024-07-11 18:33:25.031183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:38.756 [2024-07-11 18:33:25.031194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:38.756 [2024-07-11 18:33:25.037293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:38.756 [2024-07-11 18:33:25.037355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:26:38.756 [2024-07-11 18:33:25.037386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:38.756 [2024-07-11 18:33:25.037397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:38.757 [2024-07-11 18:33:25.037492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:38.757 [2024-07-11 18:33:25.037508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:26:38.757 [2024-07-11 18:33:25.037518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:38.757 [2024-07-11 18:33:25.037527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:38.757 [2024-07-11 18:33:25.037563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:38.757 [2024-07-11 18:33:25.037595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:26:38.757 [2024-07-11 18:33:25.037629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:38.757 [2024-07-11 18:33:25.037654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:38.757 [2024-07-11 18:33:25.037736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:38.757 [2024-07-11 18:33:25.037753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:26:38.757 [2024-07-11 18:33:25.037764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:38.757 [2024-07-11 18:33:25.037774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:38.757 [2024-07-11 18:33:25.037816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:38.757 [2024-07-11 18:33:25.037832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:26:38.757 [2024-07-11 18:33:25.037849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:38.757 [2024-07-11 18:33:25.037859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:38.757 [2024-07-11 18:33:25.037906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:38.757 [2024-07-11 18:33:25.037921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:26:38.757 [2024-07-11 18:33:25.037931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:38.757 [2024-07-11 18:33:25.037941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:38.757 [2024-07-11 18:33:25.037992] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:38.757 [2024-07-11 18:33:25.038008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:26:38.757 [2024-07-11 18:33:25.038024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:38.757 [2024-07-11 18:33:25.038034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:38.757 [2024-07-11 18:33:25.038189] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 8048.505 ms, result 0 00:26:42.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:42.045 18:33:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:26:42.045 18:33:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:26:42.045 18:33:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:26:42.045 18:33:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:26:42.045 18:33:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:42.045 18:33:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=96209 00:26:42.045 18:33:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:26:42.045 18:33:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:42.045 18:33:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 96209 00:26:42.045 18:33:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@829 -- # '[' -z 96209 ']' 00:26:42.045 18:33:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:42.045 18:33:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:42.045 18:33:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:42.045 18:33:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:42.045 18:33:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:42.045 [2024-07-11 18:33:28.136589] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
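Before the new target comes up, the shutdown statistics above are worth a second look: FTL reports 524288 user writes against 786752 total writes, and the quoted WAF is exactly that ratio:

    # write amplification factor as reported in the stats dump above
    echo 'scale=4; 786752 / 524288' | bc   # -> 1.5006

Assuming the usual 4 KiB FTL block, 524288 user writes is the 2048 MiB written by the two fill passes, so this fill-plus-shutdown sequence cost about 1.5x write amplification end to end. The whole 'FTL shutdown' management sequence took 8048.505 ms, almost all of it in the 8013.839 ms 'Stop core poller' step.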
00:26:42.045 [2024-07-11 18:33:28.136799] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96209 ] 00:26:42.045 [2024-07-11 18:33:28.281046] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:42.045 [2024-07-11 18:33:28.313106] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:42.304 [2024-07-11 18:33:28.550019] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:26:42.304 [2024-07-11 18:33:28.550132] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:26:42.304 [2024-07-11 18:33:28.694261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:42.304 [2024-07-11 18:33:28.694302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:26:42.304 [2024-07-11 18:33:28.694335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:26:42.304 [2024-07-11 18:33:28.694345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:42.304 [2024-07-11 18:33:28.694414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:42.304 [2024-07-11 18:33:28.694431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:26:42.304 [2024-07-11 18:33:28.694442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:26:42.304 [2024-07-11 18:33:28.694457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:42.304 [2024-07-11 18:33:28.694487] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:26:42.304 [2024-07-11 18:33:28.694769] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:26:42.304 [2024-07-11 18:33:28.694804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:42.304 [2024-07-11 18:33:28.694817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:26:42.304 [2024-07-11 18:33:28.694828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.324 ms 00:26:42.304 [2024-07-11 18:33:28.694837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:42.304 [2024-07-11 18:33:28.696302] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:26:42.304 [2024-07-11 18:33:28.698491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:42.304 [2024-07-11 18:33:28.698560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:26:42.304 [2024-07-11 18:33:28.698575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.191 ms 00:26:42.304 [2024-07-11 18:33:28.698585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:42.304 [2024-07-11 18:33:28.698650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:42.304 [2024-07-11 18:33:28.698668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:26:42.304 [2024-07-11 18:33:28.698679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:26:42.304 [2024-07-11 18:33:28.698688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:42.304 [2024-07-11 18:33:28.703053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:42.304 [2024-07-11 
18:33:28.703130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:26:42.304 [2024-07-11 18:33:28.703145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.246 ms 00:26:42.304 [2024-07-11 18:33:28.703171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:42.304 [2024-07-11 18:33:28.703227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:42.304 [2024-07-11 18:33:28.703245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:26:42.304 [2024-07-11 18:33:28.703256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:26:42.304 [2024-07-11 18:33:28.703270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:42.304 [2024-07-11 18:33:28.703370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:42.304 [2024-07-11 18:33:28.703406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:26:42.304 [2024-07-11 18:33:28.703419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:26:42.304 [2024-07-11 18:33:28.703429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:42.304 [2024-07-11 18:33:28.703466] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:26:42.304 [2024-07-11 18:33:28.704770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:42.304 [2024-07-11 18:33:28.704822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:26:42.304 [2024-07-11 18:33:28.704838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.312 ms 00:26:42.304 [2024-07-11 18:33:28.704865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:42.304 [2024-07-11 18:33:28.704906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:42.304 [2024-07-11 18:33:28.704923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:26:42.304 [2024-07-11 18:33:28.704934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:26:42.304 [2024-07-11 18:33:28.704943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:42.304 [2024-07-11 18:33:28.704970] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:26:42.304 [2024-07-11 18:33:28.705020] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:26:42.305 [2024-07-11 18:33:28.705059] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:26:42.305 [2024-07-11 18:33:28.705080] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x168 bytes 00:26:42.305 [2024-07-11 18:33:28.705210] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:26:42.305 [2024-07-11 18:33:28.705228] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:26:42.305 [2024-07-11 18:33:28.705241] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x168 bytes 00:26:42.305 [2024-07-11 18:33:28.705254] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:26:42.305 [2024-07-11 18:33:28.705278] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:26:42.305 [2024-07-11 18:33:28.705293] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:26:42.305 [2024-07-11 18:33:28.705309] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:26:42.305 [2024-07-11 18:33:28.705319] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:26:42.305 [2024-07-11 18:33:28.705331] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:26:42.305 [2024-07-11 18:33:28.705342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:42.305 [2024-07-11 18:33:28.705352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:26:42.305 [2024-07-11 18:33:28.705362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.375 ms 00:26:42.305 [2024-07-11 18:33:28.705371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:42.305 [2024-07-11 18:33:28.705452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:42.305 [2024-07-11 18:33:28.705465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:26:42.305 [2024-07-11 18:33:28.705476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.056 ms 00:26:42.305 [2024-07-11 18:33:28.705485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:42.305 [2024-07-11 18:33:28.705596] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:26:42.305 [2024-07-11 18:33:28.705631] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:26:42.305 [2024-07-11 18:33:28.705653] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:42.305 [2024-07-11 18:33:28.705664] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:42.305 [2024-07-11 18:33:28.705677] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:26:42.305 [2024-07-11 18:33:28.705687] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:26:42.305 [2024-07-11 18:33:28.705697] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:26:42.305 [2024-07-11 18:33:28.705706] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:26:42.305 [2024-07-11 18:33:28.705715] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:26:42.305 [2024-07-11 18:33:28.705724] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:42.305 [2024-07-11 18:33:28.705733] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:26:42.305 [2024-07-11 18:33:28.705742] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:26:42.305 [2024-07-11 18:33:28.705751] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:42.305 [2024-07-11 18:33:28.705760] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:26:42.305 [2024-07-11 18:33:28.705769] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:26:42.305 [2024-07-11 18:33:28.705777] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:42.305 [2024-07-11 18:33:28.705786] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:26:42.305 [2024-07-11 18:33:28.705795] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:26:42.305 [2024-07-11 18:33:28.705803] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:42.305 [2024-07-11 18:33:28.705813] 
ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:26:42.305 [2024-07-11 18:33:28.705825] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:26:42.305 [2024-07-11 18:33:28.705834] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:42.305 [2024-07-11 18:33:28.705843] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:26:42.305 [2024-07-11 18:33:28.705852] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:26:42.305 [2024-07-11 18:33:28.705861] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:42.305 [2024-07-11 18:33:28.705869] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:26:42.305 [2024-07-11 18:33:28.705878] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:26:42.305 [2024-07-11 18:33:28.705886] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:42.305 [2024-07-11 18:33:28.705895] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:26:42.305 [2024-07-11 18:33:28.705904] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:26:42.305 [2024-07-11 18:33:28.705912] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:42.305 [2024-07-11 18:33:28.705921] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:26:42.305 [2024-07-11 18:33:28.705930] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:26:42.305 [2024-07-11 18:33:28.705939] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:42.305 [2024-07-11 18:33:28.705948] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:26:42.305 [2024-07-11 18:33:28.705957] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:26:42.305 [2024-07-11 18:33:28.705968] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:42.305 [2024-07-11 18:33:28.705977] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:26:42.305 [2024-07-11 18:33:28.705987] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:26:42.305 [2024-07-11 18:33:28.705996] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:42.305 [2024-07-11 18:33:28.706005] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:26:42.305 [2024-07-11 18:33:28.706014] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:26:42.305 [2024-07-11 18:33:28.706022] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:42.305 [2024-07-11 18:33:28.706031] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:26:42.305 [2024-07-11 18:33:28.706041] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:26:42.305 [2024-07-11 18:33:28.706050] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:42.305 [2024-07-11 18:33:28.706060] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:42.305 [2024-07-11 18:33:28.706069] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:26:42.305 [2024-07-11 18:33:28.706099] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:26:42.305 [2024-07-11 18:33:28.706128] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:26:42.305 [2024-07-11 18:33:28.706138] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:26:42.305 [2024-07-11 18:33:28.706147] 
ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:26:42.305 [2024-07-11 18:33:28.706163] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:26:42.305 [2024-07-11 18:33:28.706174] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:26:42.305 [2024-07-11 18:33:28.706186] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:42.305 [2024-07-11 18:33:28.706201] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:26:42.305 [2024-07-11 18:33:28.706211] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:26:42.305 [2024-07-11 18:33:28.706221] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:26:42.305 [2024-07-11 18:33:28.706231] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:26:42.305 [2024-07-11 18:33:28.706241] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:26:42.305 [2024-07-11 18:33:28.706251] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:26:42.305 [2024-07-11 18:33:28.706260] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:26:42.305 [2024-07-11 18:33:28.706270] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:26:42.305 [2024-07-11 18:33:28.706280] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:26:42.305 [2024-07-11 18:33:28.706290] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:26:42.305 [2024-07-11 18:33:28.706299] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:26:42.305 [2024-07-11 18:33:28.706309] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:26:42.305 [2024-07-11 18:33:28.706319] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:26:42.305 [2024-07-11 18:33:28.706331] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:26:42.305 [2024-07-11 18:33:28.706342] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:26:42.305 [2024-07-11 18:33:28.706354] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:42.305 [2024-07-11 18:33:28.706365] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:42.305 [2024-07-11 18:33:28.706376] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:26:42.305 [2024-07-11 18:33:28.706387] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:26:42.305 [2024-07-11 18:33:28.706397] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:26:42.305 [2024-07-11 18:33:28.706408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:42.305 [2024-07-11 18:33:28.706418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:26:42.305 [2024-07-11 18:33:28.706428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.868 ms 00:26:42.305 [2024-07-11 18:33:28.706438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:42.305 [2024-07-11 18:33:28.706508] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:26:42.305 [2024-07-11 18:33:28.706524] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:26:44.837 [2024-07-11 18:33:30.972182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:44.837 [2024-07-11 18:33:30.972248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:26:44.837 [2024-07-11 18:33:30.972295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2265.687 ms 00:26:44.837 [2024-07-11 18:33:30.972306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:44.837 [2024-07-11 18:33:30.978802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:44.837 [2024-07-11 18:33:30.978861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:26:44.837 [2024-07-11 18:33:30.978893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.412 ms 00:26:44.837 [2024-07-11 18:33:30.978903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:44.837 [2024-07-11 18:33:30.978966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:44.837 [2024-07-11 18:33:30.978980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:26:44.837 [2024-07-11 18:33:30.979007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:26:44.837 [2024-07-11 18:33:30.979016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:44.837 [2024-07-11 18:33:30.986477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:44.837 [2024-07-11 18:33:30.986516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:26:44.837 [2024-07-11 18:33:30.986546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.395 ms 00:26:44.837 [2024-07-11 18:33:30.986567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:44.837 [2024-07-11 18:33:30.986612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:44.837 [2024-07-11 18:33:30.986632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:26:44.837 [2024-07-11 18:33:30.986643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:26:44.837 [2024-07-11 18:33:30.986652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:44.837 [2024-07-11 18:33:30.987014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:44.837 [2024-07-11 18:33:30.987042] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:26:44.837 [2024-07-11 18:33:30.987055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.298 ms 00:26:44.837 [2024-07-11 18:33:30.987065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:44.837 [2024-07-11 18:33:30.987143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:44.837 [2024-07-11 18:33:30.987161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:26:44.837 [2024-07-11 18:33:30.987177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:26:44.837 [2024-07-11 18:33:30.987186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:44.837 [2024-07-11 18:33:30.992620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:44.837 [2024-07-11 18:33:30.992682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:26:44.837 [2024-07-11 18:33:30.992696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.406 ms 00:26:44.837 [2024-07-11 18:33:30.992706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:44.837 [2024-07-11 18:33:30.995025] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:26:44.837 [2024-07-11 18:33:30.995122] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:26:44.837 [2024-07-11 18:33:30.995146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:44.837 [2024-07-11 18:33:30.995157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:26:44.837 [2024-07-11 18:33:30.995168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.339 ms 00:26:44.837 [2024-07-11 18:33:30.995192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:44.837 [2024-07-11 18:33:30.998953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:44.838 [2024-07-11 18:33:30.999005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:26:44.838 [2024-07-11 18:33:30.999020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.713 ms 00:26:44.838 [2024-07-11 18:33:30.999031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:44.838 [2024-07-11 18:33:31.000606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:44.838 [2024-07-11 18:33:31.000658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:26:44.838 [2024-07-11 18:33:31.000672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.540 ms 00:26:44.838 [2024-07-11 18:33:31.000682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:44.838 [2024-07-11 18:33:31.002142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:44.838 [2024-07-11 18:33:31.002224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:26:44.838 [2024-07-11 18:33:31.002255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.419 ms 00:26:44.838 [2024-07-11 18:33:31.002265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:44.838 [2024-07-11 18:33:31.002651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:44.838 [2024-07-11 18:33:31.002681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:26:44.838 [2024-07-11 
18:33:31.002698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.308 ms 00:26:44.838 [2024-07-11 18:33:31.002708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:44.838 [2024-07-11 18:33:31.025107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:44.838 [2024-07-11 18:33:31.025191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:26:44.838 [2024-07-11 18:33:31.025226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.372 ms 00:26:44.838 [2024-07-11 18:33:31.025238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:44.838 [2024-07-11 18:33:31.031977] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:26:44.838 [2024-07-11 18:33:31.032580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:44.838 [2024-07-11 18:33:31.032609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:26:44.838 [2024-07-11 18:33:31.032622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.276 ms 00:26:44.838 [2024-07-11 18:33:31.032632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:44.838 [2024-07-11 18:33:31.032720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:44.838 [2024-07-11 18:33:31.032738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:26:44.838 [2024-07-11 18:33:31.032750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:26:44.838 [2024-07-11 18:33:31.032759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:44.838 [2024-07-11 18:33:31.032824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:44.838 [2024-07-11 18:33:31.032841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:26:44.838 [2024-07-11 18:33:31.032852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:26:44.838 [2024-07-11 18:33:31.032862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:44.838 [2024-07-11 18:33:31.032888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:44.838 [2024-07-11 18:33:31.032901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:26:44.838 [2024-07-11 18:33:31.032924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:26:44.838 [2024-07-11 18:33:31.032933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:44.838 [2024-07-11 18:33:31.032974] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:26:44.838 [2024-07-11 18:33:31.032990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:44.838 [2024-07-11 18:33:31.033004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:26:44.838 [2024-07-11 18:33:31.033015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:26:44.838 [2024-07-11 18:33:31.033025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:44.838 [2024-07-11 18:33:31.036174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:44.838 [2024-07-11 18:33:31.036212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:26:44.838 [2024-07-11 18:33:31.036249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.124 ms 00:26:44.838 [2024-07-11 18:33:31.036263] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:26:44.838 [2024-07-11 18:33:31.036350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:44.838 [2024-07-11 18:33:31.036373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:26:44.838 [2024-07-11 18:33:31.036396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.036 ms 00:26:44.838 [2024-07-11 18:33:31.036409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:44.838 [2024-07-11 18:33:31.037797] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 2343.035 ms, result 0 00:26:44.838 [2024-07-11 18:33:31.053289] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:44.838 [2024-07-11 18:33:31.069299] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:26:44.838 [2024-07-11 18:33:31.077396] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:26:44.838 18:33:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:44.838 18:33:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # return 0 00:26:44.838 18:33:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:44.838 18:33:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:26:44.838 18:33:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:26:45.096 [2024-07-11 18:33:31.357506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:45.096 [2024-07-11 18:33:31.357548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:26:45.096 [2024-07-11 18:33:31.357592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:26:45.096 [2024-07-11 18:33:31.357603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:45.096 [2024-07-11 18:33:31.357633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:45.096 [2024-07-11 18:33:31.357649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:26:45.096 [2024-07-11 18:33:31.357659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:26:45.096 [2024-07-11 18:33:31.357669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:45.096 [2024-07-11 18:33:31.357698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:45.096 [2024-07-11 18:33:31.357710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:26:45.096 [2024-07-11 18:33:31.357720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:26:45.096 [2024-07-11 18:33:31.357729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:45.096 [2024-07-11 18:33:31.357789] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.281 ms, result 0 00:26:45.096 true 00:26:45.096 18:33:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:26:45.355 { 00:26:45.355 "name": "ftl", 00:26:45.355 "properties": [ 00:26:45.355 { 00:26:45.355 "name": "superblock_version", 00:26:45.355 "value": 5, 00:26:45.355 "read-only": true 00:26:45.355 }, 00:26:45.355 { 
00:26:45.355 "name": "base_device", 00:26:45.355 "bands": [ 00:26:45.355 { 00:26:45.355 "id": 0, 00:26:45.355 "state": "CLOSED", 00:26:45.355 "validity": 1.0 00:26:45.355 }, 00:26:45.355 { 00:26:45.355 "id": 1, 00:26:45.355 "state": "CLOSED", 00:26:45.355 "validity": 1.0 00:26:45.355 }, 00:26:45.355 { 00:26:45.355 "id": 2, 00:26:45.355 "state": "CLOSED", 00:26:45.355 "validity": 0.007843137254901933 00:26:45.355 }, 00:26:45.355 { 00:26:45.355 "id": 3, 00:26:45.355 "state": "FREE", 00:26:45.355 "validity": 0.0 00:26:45.355 }, 00:26:45.355 { 00:26:45.355 "id": 4, 00:26:45.355 "state": "FREE", 00:26:45.355 "validity": 0.0 00:26:45.355 }, 00:26:45.355 { 00:26:45.355 "id": 5, 00:26:45.355 "state": "FREE", 00:26:45.355 "validity": 0.0 00:26:45.355 }, 00:26:45.355 { 00:26:45.355 "id": 6, 00:26:45.355 "state": "FREE", 00:26:45.355 "validity": 0.0 00:26:45.355 }, 00:26:45.355 { 00:26:45.355 "id": 7, 00:26:45.355 "state": "FREE", 00:26:45.355 "validity": 0.0 00:26:45.355 }, 00:26:45.355 { 00:26:45.355 "id": 8, 00:26:45.355 "state": "FREE", 00:26:45.355 "validity": 0.0 00:26:45.355 }, 00:26:45.355 { 00:26:45.355 "id": 9, 00:26:45.355 "state": "FREE", 00:26:45.355 "validity": 0.0 00:26:45.355 }, 00:26:45.355 { 00:26:45.355 "id": 10, 00:26:45.355 "state": "FREE", 00:26:45.355 "validity": 0.0 00:26:45.355 }, 00:26:45.355 { 00:26:45.355 "id": 11, 00:26:45.355 "state": "FREE", 00:26:45.355 "validity": 0.0 00:26:45.355 }, 00:26:45.355 { 00:26:45.355 "id": 12, 00:26:45.355 "state": "FREE", 00:26:45.355 "validity": 0.0 00:26:45.355 }, 00:26:45.355 { 00:26:45.355 "id": 13, 00:26:45.355 "state": "FREE", 00:26:45.355 "validity": 0.0 00:26:45.355 }, 00:26:45.355 { 00:26:45.355 "id": 14, 00:26:45.355 "state": "FREE", 00:26:45.355 "validity": 0.0 00:26:45.355 }, 00:26:45.355 { 00:26:45.355 "id": 15, 00:26:45.355 "state": "FREE", 00:26:45.355 "validity": 0.0 00:26:45.355 }, 00:26:45.355 { 00:26:45.355 "id": 16, 00:26:45.355 "state": "FREE", 00:26:45.355 "validity": 0.0 00:26:45.355 }, 00:26:45.355 { 00:26:45.355 "id": 17, 00:26:45.355 "state": "FREE", 00:26:45.355 "validity": 0.0 00:26:45.355 } 00:26:45.355 ], 00:26:45.355 "read-only": true 00:26:45.355 }, 00:26:45.355 { 00:26:45.355 "name": "cache_device", 00:26:45.355 "type": "bdev", 00:26:45.355 "chunks": [ 00:26:45.355 { 00:26:45.355 "id": 0, 00:26:45.355 "state": "INACTIVE", 00:26:45.355 "utilization": 0.0 00:26:45.355 }, 00:26:45.355 { 00:26:45.355 "id": 1, 00:26:45.355 "state": "OPEN", 00:26:45.355 "utilization": 0.0 00:26:45.355 }, 00:26:45.355 { 00:26:45.355 "id": 2, 00:26:45.355 "state": "OPEN", 00:26:45.355 "utilization": 0.0 00:26:45.355 }, 00:26:45.355 { 00:26:45.355 "id": 3, 00:26:45.355 "state": "FREE", 00:26:45.355 "utilization": 0.0 00:26:45.355 }, 00:26:45.355 { 00:26:45.355 "id": 4, 00:26:45.355 "state": "FREE", 00:26:45.355 "utilization": 0.0 00:26:45.355 } 00:26:45.355 ], 00:26:45.355 "read-only": true 00:26:45.355 }, 00:26:45.355 { 00:26:45.355 "name": "verbose_mode", 00:26:45.355 "value": true, 00:26:45.355 "unit": "", 00:26:45.355 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:26:45.355 }, 00:26:45.355 { 00:26:45.355 "name": "prep_upgrade_on_shutdown", 00:26:45.355 "value": false, 00:26:45.355 "unit": "", 00:26:45.355 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:26:45.355 } 00:26:45.355 ] 00:26:45.355 } 00:26:45.355 18:33:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:26:45.355 18:33:31 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:26:45.355 18:33:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:26:45.614 18:33:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:26:45.614 18:33:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:26:45.614 18:33:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:26:45.614 18:33:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:26:45.614 18:33:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:26:45.873 Validate MD5 checksum, iteration 1 00:26:45.873 18:33:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:26:45.873 18:33:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:26:45.873 18:33:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:26:45.873 18:33:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:26:45.873 18:33:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:26:45.873 18:33:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:26:45.873 18:33:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:26:45.873 18:33:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:26:45.873 18:33:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:45.873 18:33:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:45.873 18:33:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:45.873 18:33:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:26:45.873 18:33:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:26:45.873 [2024-07-11 18:33:32.174113] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
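
The two jq pipelines traced just above are how upgrade_shutdown.sh decides the instance is clean before the first checksum pass: one counts NV cache chunks whose utilization is non-zero, the other counts bands reported as OPENED, and both results feed the `[[ 0 -ne 0 ]]` guards. A condensed sketch of that check, using the same RPC call and the same filters as the trace (it assumes the target launched from tgt.json is up and serving the FTL bdev "ftl"):

    # Condensed sketch of the cleanliness check traced above.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    used=$("$rpc" bdev_ftl_get_properties -b ftl \
      | jq '[.properties[] | select(.name == "cache_device")
             | .chunks[] | select(.utilization != 0.0)] | length')

    opened=$("$rpc" bdev_ftl_get_properties -b ftl \
      | jq '[.properties[] | select(.name == "bands")
             | .bands[] | select(.state == "OPENED")] | length')

    # A freshly created instance should report zero for both counters;
    # anything else means dirty state and the test bails out.
    (( used == 0 )) || exit 1
    (( opened == 0 )) || exit 1
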
00:26:45.873 [2024-07-11 18:33:32.174303] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96267 ] 00:26:46.131 [2024-07-11 18:33:32.325455] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:46.131 [2024-07-11 18:33:32.367576] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:49.018  Copying: 492/1024 [MB] (492 MBps) Copying: 969/1024 [MB] (477 MBps) Copying: 1024/1024 [MB] (average 482 MBps) 00:26:49.018 00:26:49.018 18:33:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:26:49.018 18:33:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:50.920 18:33:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:26:50.920 Validate MD5 checksum, iteration 2 00:26:50.920 18:33:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=024dc100ecdfcb76b8b1ceac7710cec8 00:26:50.920 18:33:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 024dc100ecdfcb76b8b1ceac7710cec8 != \0\2\4\d\c\1\0\0\e\c\d\f\c\b\7\6\b\8\b\1\c\e\a\c\7\7\1\0\c\e\c\8 ]] 00:26:50.920 18:33:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:26:50.920 18:33:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:26:50.920 18:33:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:26:50.920 18:33:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:26:50.920 18:33:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:50.920 18:33:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:50.920 18:33:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:50.920 18:33:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:26:50.920 18:33:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:26:50.920 [2024-07-11 18:33:37.045300] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
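
Each "Validate MD5 checksum" iteration above has the same shape: spdk_dd, acting as an NVMe/TCP initiator configured by ini.json, reads 1024 one-MiB blocks from ftln1 at a growing --skip offset into a scratch file, and the file's md5sum is compared against the sum recorded for that region. A minimal sketch of the loop, assuming `iterations` and a `ref_sums` array are supplied by the caller; the command line and paths are taken verbatim from the trace:

    # Minimal sketch of the checksum loop; iterations and ref_sums are
    # assumed inputs, everything else mirrors the traced invocation.
    dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    ini=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
    file=/home/vagrant/spdk_repo/spdk/test/ftl/file

    skip=0
    for ((i = 0; i < iterations; i++)); do
      echo "Validate MD5 checksum, iteration $((i + 1))"
      # Read 1024 x 1 MiB blocks from ftln1 over NVMe/TCP into the scratch file.
      "$dd_bin" '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock \
        --json="$ini" --ib=ftln1 --of="$file" \
        --bs=1048576 --count=1024 --qd=2 --skip=$skip
      skip=$((skip + 1024))
      sum=$(md5sum "$file" | cut -f1 -d' ')
      [[ $sum == "${ref_sums[i]}" ]] || exit 1  # any mismatch fails the test
    done
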
00:26:50.920 [2024-07-11 18:33:37.045510] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96316 ] 00:26:50.920 [2024-07-11 18:33:37.190072] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:50.920 [2024-07-11 18:33:37.223293] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:55.798  Copying: 495/1024 [MB] (495 MBps) Copying: 984/1024 [MB] (489 MBps) Copying: 1024/1024 [MB] (average 491 MBps) 00:26:55.798 00:26:55.798 18:33:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:26:55.798 18:33:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:57.703 18:33:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:26:57.703 18:33:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=5608e81063ce2981fb41dcd812ed704a 00:26:57.703 18:33:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 5608e81063ce2981fb41dcd812ed704a != \5\6\0\8\e\8\1\0\6\3\c\e\2\9\8\1\f\b\4\1\d\c\d\8\1\2\e\d\7\0\4\a ]] 00:26:57.703 18:33:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:26:57.703 18:33:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:26:57.703 18:33:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:26:57.703 18:33:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 96209 ]] 00:26:57.703 18:33:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 96209 00:26:57.703 18:33:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:26:57.703 18:33:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:26:57.703 18:33:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:26:57.703 18:33:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:26:57.703 18:33:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:57.703 18:33:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=96391 00:26:57.703 18:33:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:26:57.703 18:33:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 96391 00:26:57.703 18:33:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:57.703 18:33:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@829 -- # '[' -z 96391 ']' 00:26:57.703 18:33:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:57.703 18:33:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:57.703 18:33:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:57.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
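
The kill -9 above is the point of the test: the target holding the FTL instance (pid 96209) is killed without ever running the FTL shutdown path, and a fresh spdk_tgt is then started from the saved tgt.json, which forces the dirty-recovery startup that follows. A sketch of that sequence, relying on the waitforlisten helper from autotest_common.sh:

    # Sketch of the dirty shutdown plus restart driven above. SIGKILL means
    # no clean-shutdown marker is written, so the relaunched target has to
    # recover from the NV cache and the P2L checkpoints.
    kill -9 "$spdk_tgt_pid"
    unset spdk_tgt_pid

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' \
      --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &
    spdk_tgt_pid=$!
    waitforlisten "$spdk_tgt_pid"  # block until the RPC socket answers
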
00:26:57.703 18:33:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:57.703 18:33:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:57.703 [2024-07-11 18:33:43.767605] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:26:57.703 [2024-07-11 18:33:43.767824] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96391 ] 00:26:57.703 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 828: 96209 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:26:57.703 [2024-07-11 18:33:43.914378] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:57.703 [2024-07-11 18:33:43.946571] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:57.963 [2024-07-11 18:33:44.185160] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:26:57.963 [2024-07-11 18:33:44.185250] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:26:57.963 [2024-07-11 18:33:44.329335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:57.963 [2024-07-11 18:33:44.329378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:26:57.963 [2024-07-11 18:33:44.329416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:26:57.963 [2024-07-11 18:33:44.329427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:57.963 [2024-07-11 18:33:44.329496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:57.963 [2024-07-11 18:33:44.329528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:26:57.963 [2024-07-11 18:33:44.329555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.039 ms 00:26:57.963 [2024-07-11 18:33:44.329564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:57.963 [2024-07-11 18:33:44.329601] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:26:57.963 [2024-07-11 18:33:44.329877] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:26:57.963 [2024-07-11 18:33:44.329912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:57.963 [2024-07-11 18:33:44.329924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:26:57.963 [2024-07-11 18:33:44.329935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.324 ms 00:26:57.963 [2024-07-11 18:33:44.329944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:57.963 [2024-07-11 18:33:44.330456] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:26:57.963 [2024-07-11 18:33:44.333710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:57.963 [2024-07-11 18:33:44.333755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:26:57.963 [2024-07-11 18:33:44.333785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.256 ms 00:26:57.963 [2024-07-11 18:33:44.333794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:57.963 [2024-07-11 18:33:44.334684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:26:57.963 [2024-07-11 18:33:44.334736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:26:57.963 [2024-07-11 18:33:44.334753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:26:57.963 [2024-07-11 18:33:44.334772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:57.963 [2024-07-11 18:33:44.335232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:57.963 [2024-07-11 18:33:44.335264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:26:57.963 [2024-07-11 18:33:44.335277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.344 ms 00:26:57.963 [2024-07-11 18:33:44.335296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:57.963 [2024-07-11 18:33:44.335342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:57.963 [2024-07-11 18:33:44.335365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:26:57.963 [2024-07-11 18:33:44.335376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:26:57.963 [2024-07-11 18:33:44.335390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:57.963 [2024-07-11 18:33:44.335453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:57.963 [2024-07-11 18:33:44.335468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:26:57.963 [2024-07-11 18:33:44.335479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:26:57.963 [2024-07-11 18:33:44.335499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:57.963 [2024-07-11 18:33:44.335531] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:26:57.963 [2024-07-11 18:33:44.336501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:57.963 [2024-07-11 18:33:44.336557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:26:57.963 [2024-07-11 18:33:44.336574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.973 ms 00:26:57.963 [2024-07-11 18:33:44.336583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:57.963 [2024-07-11 18:33:44.336616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:57.963 [2024-07-11 18:33:44.336629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:26:57.963 [2024-07-11 18:33:44.336639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:26:57.963 [2024-07-11 18:33:44.336648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:57.963 [2024-07-11 18:33:44.336714] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:26:57.963 [2024-07-11 18:33:44.336748] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:26:57.963 [2024-07-11 18:33:44.336783] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:26:57.963 [2024-07-11 18:33:44.336804] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x168 bytes 00:26:57.963 [2024-07-11 18:33:44.336911] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:26:57.963 [2024-07-11 18:33:44.336946] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:26:57.963 [2024-07-11 18:33:44.336959] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x168 bytes 00:26:57.963 [2024-07-11 18:33:44.336976] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:26:57.963 [2024-07-11 18:33:44.336989] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:26:57.963 [2024-07-11 18:33:44.337000] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:26:57.963 [2024-07-11 18:33:44.337018] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:26:57.963 [2024-07-11 18:33:44.337034] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:26:57.963 [2024-07-11 18:33:44.337046] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:26:57.963 [2024-07-11 18:33:44.337057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:57.963 [2024-07-11 18:33:44.337067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:26:57.963 [2024-07-11 18:33:44.337091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.345 ms 00:26:57.963 [2024-07-11 18:33:44.337104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:57.963 [2024-07-11 18:33:44.337208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:57.963 [2024-07-11 18:33:44.337231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:26:57.963 [2024-07-11 18:33:44.337245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.072 ms 00:26:57.963 [2024-07-11 18:33:44.337255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:57.963 [2024-07-11 18:33:44.337364] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:26:57.963 [2024-07-11 18:33:44.337383] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:26:57.963 [2024-07-11 18:33:44.337397] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:57.963 [2024-07-11 18:33:44.337408] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:57.963 [2024-07-11 18:33:44.337418] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:26:57.963 [2024-07-11 18:33:44.337426] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:26:57.963 [2024-07-11 18:33:44.337436] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:26:57.963 [2024-07-11 18:33:44.337444] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:26:57.963 [2024-07-11 18:33:44.337453] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:26:57.963 [2024-07-11 18:33:44.337462] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:57.963 [2024-07-11 18:33:44.337471] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:26:57.963 [2024-07-11 18:33:44.337479] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:26:57.963 [2024-07-11 18:33:44.337488] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:57.963 [2024-07-11 18:33:44.337496] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:26:57.963 [2024-07-11 18:33:44.337505] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:26:57.963 [2024-07-11 18:33:44.337513] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:57.963 [2024-07-11 18:33:44.337522] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:26:57.963 [2024-07-11 18:33:44.337531] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:26:57.963 [2024-07-11 18:33:44.337541] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:57.963 [2024-07-11 18:33:44.337551] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:26:57.963 [2024-07-11 18:33:44.337560] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:26:57.964 [2024-07-11 18:33:44.337568] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:57.964 [2024-07-11 18:33:44.337577] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:26:57.964 [2024-07-11 18:33:44.337585] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:26:57.964 [2024-07-11 18:33:44.337594] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:57.964 [2024-07-11 18:33:44.337602] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:26:57.964 [2024-07-11 18:33:44.337611] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:26:57.964 [2024-07-11 18:33:44.337619] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:57.964 [2024-07-11 18:33:44.337628] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:26:57.964 [2024-07-11 18:33:44.337637] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:26:57.964 [2024-07-11 18:33:44.337646] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:57.964 [2024-07-11 18:33:44.337655] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:26:57.964 [2024-07-11 18:33:44.337664] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:26:57.964 [2024-07-11 18:33:44.337672] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:57.964 [2024-07-11 18:33:44.337683] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:26:57.964 [2024-07-11 18:33:44.337692] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:26:57.964 [2024-07-11 18:33:44.337701] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:57.964 [2024-07-11 18:33:44.337710] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:26:57.964 [2024-07-11 18:33:44.337718] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:26:57.964 [2024-07-11 18:33:44.337726] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:57.964 [2024-07-11 18:33:44.337750] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:26:57.964 [2024-07-11 18:33:44.337759] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:26:57.964 [2024-07-11 18:33:44.337768] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:57.964 [2024-07-11 18:33:44.337776] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:26:57.964 [2024-07-11 18:33:44.337786] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:26:57.964 [2024-07-11 18:33:44.337795] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:57.964 [2024-07-11 18:33:44.337804] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:26:57.964 [2024-07-11 18:33:44.337814] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:26:57.964 [2024-07-11 18:33:44.337823] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:26:57.964 [2024-07-11 18:33:44.337832] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:26:57.964 [2024-07-11 18:33:44.337846] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:26:57.964 [2024-07-11 18:33:44.337855] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:26:57.964 [2024-07-11 18:33:44.337864] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:26:57.964 [2024-07-11 18:33:44.337874] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:26:57.964 [2024-07-11 18:33:44.337886] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:57.964 [2024-07-11 18:33:44.337897] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:26:57.964 [2024-07-11 18:33:44.337908] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:26:57.964 [2024-07-11 18:33:44.337918] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:26:57.964 [2024-07-11 18:33:44.337927] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:26:57.964 [2024-07-11 18:33:44.337937] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:26:57.964 [2024-07-11 18:33:44.337947] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:26:57.964 [2024-07-11 18:33:44.337957] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:26:57.964 [2024-07-11 18:33:44.337968] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:26:57.964 [2024-07-11 18:33:44.337977] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:26:57.964 [2024-07-11 18:33:44.337987] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:26:57.964 [2024-07-11 18:33:44.337997] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:26:57.964 [2024-07-11 18:33:44.338009] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:26:57.964 [2024-07-11 18:33:44.338020] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:26:57.964 [2024-07-11 18:33:44.338030] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:26:57.964 [2024-07-11 18:33:44.338040] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:26:57.964 [2024-07-11 18:33:44.338053] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:57.964 [2024-07-11 18:33:44.338078] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:57.964 [2024-07-11 18:33:44.338088] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:26:57.964 [2024-07-11 18:33:44.338097] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:26:57.964 [2024-07-11 18:33:44.338136] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:26:57.964 [2024-07-11 18:33:44.338150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:57.964 [2024-07-11 18:33:44.338160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:26:57.964 [2024-07-11 18:33:44.338180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.842 ms 00:26:57.964 [2024-07-11 18:33:44.338190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:57.964 [2024-07-11 18:33:44.344460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:57.964 [2024-07-11 18:33:44.344515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:26:57.964 [2024-07-11 18:33:44.344541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.201 ms 00:26:57.964 [2024-07-11 18:33:44.344550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:57.964 [2024-07-11 18:33:44.344604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:57.964 [2024-07-11 18:33:44.344617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:26:57.964 [2024-07-11 18:33:44.344627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:26:57.964 [2024-07-11 18:33:44.344636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:57.964 [2024-07-11 18:33:44.352079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:57.964 [2024-07-11 18:33:44.352180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:26:57.964 [2024-07-11 18:33:44.352197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.367 ms 00:26:57.964 [2024-07-11 18:33:44.352215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:57.964 [2024-07-11 18:33:44.352312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:57.964 [2024-07-11 18:33:44.352327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:26:57.964 [2024-07-11 18:33:44.352339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:26:57.964 [2024-07-11 18:33:44.352348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:57.964 [2024-07-11 18:33:44.352463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:57.964 [2024-07-11 18:33:44.352481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:26:57.964 [2024-07-11 18:33:44.352493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.061 ms 00:26:57.964 [2024-07-11 18:33:44.352503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:26:57.964 [2024-07-11 18:33:44.352550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:57.964 [2024-07-11 18:33:44.352564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:26:57.964 [2024-07-11 18:33:44.352574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:26:57.964 [2024-07-11 18:33:44.352586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:57.964 [2024-07-11 18:33:44.357890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:57.964 [2024-07-11 18:33:44.357946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:26:57.964 [2024-07-11 18:33:44.357960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.276 ms 00:26:57.964 [2024-07-11 18:33:44.357969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:57.964 [2024-07-11 18:33:44.358136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:57.964 [2024-07-11 18:33:44.358164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:26:57.964 [2024-07-11 18:33:44.358176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:26:57.964 [2024-07-11 18:33:44.358196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:57.964 [2024-07-11 18:33:44.369592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:57.964 [2024-07-11 18:33:44.369632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:26:57.964 [2024-07-11 18:33:44.369670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.364 ms 00:26:57.964 [2024-07-11 18:33:44.369689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:57.964 [2024-07-11 18:33:44.370819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:57.964 [2024-07-11 18:33:44.370876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:26:57.964 [2024-07-11 18:33:44.370889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.242 ms 00:26:57.964 [2024-07-11 18:33:44.370898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:58.223 [2024-07-11 18:33:44.386334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:58.223 [2024-07-11 18:33:44.386421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:26:58.223 [2024-07-11 18:33:44.386455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.391 ms 00:26:58.223 [2024-07-11 18:33:44.386466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:58.223 [2024-07-11 18:33:44.386691] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:26:58.223 [2024-07-11 18:33:44.386812] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:26:58.223 [2024-07-11 18:33:44.386920] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:26:58.223 [2024-07-11 18:33:44.387016] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:26:58.223 [2024-07-11 18:33:44.387030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:58.223 [2024-07-11 18:33:44.387041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:26:58.223 [2024-07-11 
18:33:44.387052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.483 ms 00:26:58.223 [2024-07-11 18:33:44.387074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:58.223 [2024-07-11 18:33:44.387158] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:26:58.223 [2024-07-11 18:33:44.387188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:58.223 [2024-07-11 18:33:44.387198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:26:58.223 [2024-07-11 18:33:44.387209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:26:58.223 [2024-07-11 18:33:44.387218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:58.224 [2024-07-11 18:33:44.389798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:58.224 [2024-07-11 18:33:44.389855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:26:58.224 [2024-07-11 18:33:44.389873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.551 ms 00:26:58.224 [2024-07-11 18:33:44.389892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:58.224 [2024-07-11 18:33:44.390640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:58.224 [2024-07-11 18:33:44.390692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:26:58.224 [2024-07-11 18:33:44.390706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:26:58.224 [2024-07-11 18:33:44.390717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:58.224 [2024-07-11 18:33:44.390923] ftl_nv_cache.c:2471:ftl_mngt_nv_cache_recover_open_chunk: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:26:58.791 [2024-07-11 18:33:44.954467] ftl_nv_cache.c:2408:recover_open_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:26:58.791 [2024-07-11 18:33:44.954768] ftl_nv_cache.c:2471:ftl_mngt_nv_cache_recover_open_chunk: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:26:59.360 [2024-07-11 18:33:45.525378] ftl_nv_cache.c:2408:recover_open_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:26:59.360 [2024-07-11 18:33:45.525543] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:26:59.360 [2024-07-11 18:33:45.525565] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:26:59.360 [2024-07-11 18:33:45.525581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:59.360 [2024-07-11 18:33:45.525593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:26:59.360 [2024-07-11 18:33:45.525623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1134.812 ms 00:26:59.360 [2024-07-11 18:33:45.525647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:59.360 [2024-07-11 18:33:45.525703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:59.360 [2024-07-11 18:33:45.525716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:26:59.360 [2024-07-11 18:33:45.525737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:26:59.360 [2024-07-11 18:33:45.525746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 
0 00:26:59.360 [2024-07-11 18:33:45.532665] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:26:59.360 [2024-07-11 18:33:45.532795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:59.360 [2024-07-11 18:33:45.532812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:26:59.360 [2024-07-11 18:33:45.532823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.029 ms 00:26:59.360 [2024-07-11 18:33:45.532831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:59.360 [2024-07-11 18:33:45.533488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:59.360 [2024-07-11 18:33:45.533518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:26:59.360 [2024-07-11 18:33:45.533531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.581 ms 00:26:59.360 [2024-07-11 18:33:45.533540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:59.360 [2024-07-11 18:33:45.535668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:59.360 [2024-07-11 18:33:45.535742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:26:59.360 [2024-07-11 18:33:45.535754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.104 ms 00:26:59.360 [2024-07-11 18:33:45.535763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:59.360 [2024-07-11 18:33:45.535828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:59.360 [2024-07-11 18:33:45.535843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:26:59.360 [2024-07-11 18:33:45.535853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:26:59.360 [2024-07-11 18:33:45.535862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:59.360 [2024-07-11 18:33:45.536004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:59.360 [2024-07-11 18:33:45.536021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:26:59.360 [2024-07-11 18:33:45.536043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.048 ms 00:26:59.360 [2024-07-11 18:33:45.536052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:59.360 [2024-07-11 18:33:45.536078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:59.360 [2024-07-11 18:33:45.536096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:26:59.360 [2024-07-11 18:33:45.536106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:26:59.360 [2024-07-11 18:33:45.536115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:59.360 [2024-07-11 18:33:45.536173] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:26:59.360 [2024-07-11 18:33:45.536190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:59.360 [2024-07-11 18:33:45.536200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:26:59.360 [2024-07-11 18:33:45.536210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:26:59.360 [2024-07-11 18:33:45.536219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:59.360 [2024-07-11 18:33:45.536272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:59.360 [2024-07-11 
18:33:45.536285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:26:59.360 [2024-07-11 18:33:45.536299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:26:59.360 [2024-07-11 18:33:45.536308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:59.360 [2024-07-11 18:33:45.537568] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1207.683 ms, result 0 00:26:59.360 [2024-07-11 18:33:45.553151] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:59.360 [2024-07-11 18:33:45.569149] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:26:59.360 [2024-07-11 18:33:45.577260] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:26:59.360 Validate MD5 checksum, iteration 1 00:26:59.360 18:33:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:59.360 18:33:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # return 0 00:26:59.360 18:33:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:59.360 18:33:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:26:59.361 18:33:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:26:59.361 18:33:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:26:59.361 18:33:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:26:59.361 18:33:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:26:59.361 18:33:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:26:59.361 18:33:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:26:59.361 18:33:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:59.361 18:33:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:59.361 18:33:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:59.361 18:33:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:26:59.361 18:33:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:26:59.361 [2024-07-11 18:33:45.703273] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
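
Unlike the first startup, which scrubbed the NV cache and initialized fresh metadata, this startup takes the recovery chain: Recover band state, Preprocess/Restore P2L checkpoints, Recover chunk state, and replay of the two open chunks (seq ids 14 and 15) before the L2P is restored from shared memory. One low-tech way to assert that the recovery path really ran is to scan the captured target output for those milestones; a sketch, assuming the target's stdout was redirected to a hypothetical spdk_tgt.log:

    # Sketch only: spdk_tgt.log is a hypothetical capture of the target's
    # stdout; the milestone strings are verbatim from the trace above.
    grep -qE 'Recover band state|Restore P2L checkpoints|Recover chunk state' \
      spdk_tgt.log || { echo 'dirty recovery did not run' >&2; exit 1; }
    grep -c 'Recovered chunk' spdk_tgt.log  # expect 2: seq ids 14 and 15
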
00:26:59.361 [2024-07-11 18:33:45.703504] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96414 ] 00:26:59.619 [2024-07-11 18:33:45.854192] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:59.619 [2024-07-11 18:33:45.896065] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:04.094  Copying: 494/1024 [MB] (494 MBps) Copying: 964/1024 [MB] (470 MBps) Copying: 1024/1024 [MB] (average 476 MBps) 00:27:04.094 00:27:04.094 18:33:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:27:04.094 18:33:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:05.998 18:33:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:27:05.998 Validate MD5 checksum, iteration 2 00:27:05.998 18:33:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=024dc100ecdfcb76b8b1ceac7710cec8 00:27:05.998 18:33:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 024dc100ecdfcb76b8b1ceac7710cec8 != \0\2\4\d\c\1\0\0\e\c\d\f\c\b\7\6\b\8\b\1\c\e\a\c\7\7\1\0\c\e\c\8 ]] 00:27:05.998 18:33:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:27:05.998 18:33:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:05.998 18:33:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:27:05.998 18:33:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:05.998 18:33:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:05.998 18:33:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:05.998 18:33:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:05.998 18:33:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:05.998 18:33:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:05.998 [2024-07-11 18:33:52.358368] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
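
[editor's note] The tcp_dd calls traced above are a thin wrapper around spdk_dd. The following is a minimal sketch of what ftl/common.sh@198-199 appears to expand to, assuming the $spdk_dd_bin and $spdk_ini_* variables that this log prints later during the restore test; the function body is reconstructed from the xtrace, not quoted from the script.

tcp_dd() {
    # Make sure the initiator config (ini.json) exists, then pass all
    # caller arguments straight through to spdk_dd pinned to the
    # initiator core and RPC socket.
    tcp_initiator_setup
    "$spdk_dd_bin" "--cpumask=$spdk_ini_cpumask" \
        --rpc-socket="$spdk_ini_rpc" \
        --json="$spdk_ini_cnfg" \
        "$@"
}
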
00:27:05.998 [2024-07-11 18:33:52.358557] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96481 ] 00:27:06.258 [2024-07-11 18:33:52.508386] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:06.258 [2024-07-11 18:33:52.550622] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:10.200  Copying: 500/1024 [MB] (500 MBps) Copying: 955/1024 [MB] (455 MBps) Copying: 1024/1024 [MB] (average 471 MBps) 00:27:10.200 00:27:10.200 18:33:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:27:10.200 18:33:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:12.102 18:33:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:27:12.102 18:33:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=5608e81063ce2981fb41dcd812ed704a 00:27:12.102 18:33:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 5608e81063ce2981fb41dcd812ed704a != \5\6\0\8\e\8\1\0\6\3\c\e\2\9\8\1\f\b\4\1\d\c\d\8\1\2\e\d\7\0\4\a ]] 00:27:12.102 18:33:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:27:12.102 18:33:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:12.102 18:33:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:27:12.102 18:33:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:27:12.102 18:33:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:27:12.102 18:33:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:12.102 18:33:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:27:12.102 18:33:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:27:12.102 18:33:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:27:12.102 18:33:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:27:12.102 18:33:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 96391 ]] 00:27:12.102 18:33:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 96391 00:27:12.102 18:33:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@948 -- # '[' -z 96391 ']' 00:27:12.102 18:33:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # kill -0 96391 00:27:12.102 18:33:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@953 -- # uname 00:27:12.102 18:33:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:12.102 18:33:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 96391 00:27:12.102 18:33:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:12.102 18:33:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:12.102 killing process with pid 96391 00:27:12.102 18:33:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@966 -- # echo 'killing process with pid 96391' 00:27:12.103 18:33:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@967 -- # kill 96391 00:27:12.103 18:33:58 
ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # wait 96391 00:27:12.103 [2024-07-11 18:33:58.395857] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:27:12.103 [2024-07-11 18:33:58.400548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:12.103 [2024-07-11 18:33:58.400592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:27:12.103 [2024-07-11 18:33:58.400626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:12.103 [2024-07-11 18:33:58.400636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:12.103 [2024-07-11 18:33:58.400663] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:27:12.103 [2024-07-11 18:33:58.401209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:12.103 [2024-07-11 18:33:58.401238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:27:12.103 [2024-07-11 18:33:58.401266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.528 ms 00:27:12.103 [2024-07-11 18:33:58.401276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:12.103 [2024-07-11 18:33:58.401516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:12.103 [2024-07-11 18:33:58.401574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:27:12.103 [2024-07-11 18:33:58.401586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.216 ms 00:27:12.103 [2024-07-11 18:33:58.401600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:12.103 [2024-07-11 18:33:58.402734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:12.103 [2024-07-11 18:33:58.402771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:27:12.103 [2024-07-11 18:33:58.402801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.104 ms 00:27:12.103 [2024-07-11 18:33:58.402810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:12.103 [2024-07-11 18:33:58.404211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:12.103 [2024-07-11 18:33:58.404298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:27:12.103 [2024-07-11 18:33:58.404314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.359 ms 00:27:12.103 [2024-07-11 18:33:58.404325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:12.103 [2024-07-11 18:33:58.405960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:12.103 [2024-07-11 18:33:58.406045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:27:12.103 [2024-07-11 18:33:58.406076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.567 ms 00:27:12.103 [2024-07-11 18:33:58.406110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:12.103 [2024-07-11 18:33:58.407378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:12.103 [2024-07-11 18:33:58.407474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:27:12.103 [2024-07-11 18:33:58.407506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.191 ms 00:27:12.103 [2024-07-11 18:33:58.407525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:12.103 [2024-07-11 18:33:58.407611] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:12.103 [2024-07-11 18:33:58.407645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:27:12.103 [2024-07-11 18:33:58.407657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.042 ms 00:27:12.103 [2024-07-11 18:33:58.407667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:12.103 [2024-07-11 18:33:58.409211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:12.103 [2024-07-11 18:33:58.409248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: persist band info metadata 00:27:12.103 [2024-07-11 18:33:58.409277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.522 ms 00:27:12.103 [2024-07-11 18:33:58.409287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:12.103 [2024-07-11 18:33:58.410678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:12.103 [2024-07-11 18:33:58.410726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: persist trim metadata 00:27:12.103 [2024-07-11 18:33:58.410755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.350 ms 00:27:12.103 [2024-07-11 18:33:58.410764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:12.103 [2024-07-11 18:33:58.412162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:12.103 [2024-07-11 18:33:58.412238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:27:12.103 [2024-07-11 18:33:58.412268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.348 ms 00:27:12.103 [2024-07-11 18:33:58.412278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:12.103 [2024-07-11 18:33:58.413563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:12.103 [2024-07-11 18:33:58.413598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:27:12.103 [2024-07-11 18:33:58.413627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.216 ms 00:27:12.103 [2024-07-11 18:33:58.413636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:12.103 [2024-07-11 18:33:58.413688] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:27:12.103 [2024-07-11 18:33:58.413709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:27:12.103 [2024-07-11 18:33:58.413728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:27:12.103 [2024-07-11 18:33:58.413739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:27:12.103 [2024-07-11 18:33:58.413749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:12.103 [2024-07-11 18:33:58.413759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:12.103 [2024-07-11 18:33:58.413769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:12.103 [2024-07-11 18:33:58.413795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:12.103 [2024-07-11 18:33:58.413821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:12.103 [2024-07-11 18:33:58.413832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:12.103 [2024-07-11 18:33:58.413842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:12.103 [2024-07-11 18:33:58.413853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:12.103 [2024-07-11 18:33:58.413863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:12.103 [2024-07-11 18:33:58.413874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:12.103 [2024-07-11 18:33:58.413884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:12.103 [2024-07-11 18:33:58.413895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:12.103 [2024-07-11 18:33:58.413905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:12.103 [2024-07-11 18:33:58.413916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:12.103 [2024-07-11 18:33:58.413926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:12.103 [2024-07-11 18:33:58.413938] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:27:12.103 [2024-07-11 18:33:58.413948] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: b61277fd-2124-4816-a711-7d3f4143c0e1 00:27:12.103 [2024-07-11 18:33:58.413958] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:27:12.103 [2024-07-11 18:33:58.413971] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:27:12.103 [2024-07-11 18:33:58.413980] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:27:12.103 [2024-07-11 18:33:58.413990] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:27:12.103 [2024-07-11 18:33:58.414000] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:27:12.103 [2024-07-11 18:33:58.414010] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:27:12.103 [2024-07-11 18:33:58.414019] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:27:12.103 [2024-07-11 18:33:58.414028] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:27:12.103 [2024-07-11 18:33:58.414037] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:27:12.103 [2024-07-11 18:33:58.414046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:12.103 [2024-07-11 18:33:58.414056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:27:12.103 [2024-07-11 18:33:58.414083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.361 ms 00:27:12.103 [2024-07-11 18:33:58.414093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:12.103 [2024-07-11 18:33:58.415716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:12.103 [2024-07-11 18:33:58.415793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:27:12.103 [2024-07-11 18:33:58.415816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.584 ms 00:27:12.103 [2024-07-11 18:33:58.415847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:12.103 [2024-07-11 18:33:58.415973] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:27:12.103 [2024-07-11 18:33:58.416001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:27:12.103 [2024-07-11 18:33:58.416022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.088 ms 00:27:12.103 [2024-07-11 18:33:58.416040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:12.103 [2024-07-11 18:33:58.421724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:12.103 [2024-07-11 18:33:58.421779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:12.103 [2024-07-11 18:33:58.421809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:12.103 [2024-07-11 18:33:58.421819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:12.103 [2024-07-11 18:33:58.421855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:12.103 [2024-07-11 18:33:58.421869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:12.103 [2024-07-11 18:33:58.421879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:12.103 [2024-07-11 18:33:58.421888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:12.103 [2024-07-11 18:33:58.421965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:12.103 [2024-07-11 18:33:58.422005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:12.103 [2024-07-11 18:33:58.422041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:12.103 [2024-07-11 18:33:58.422059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:12.103 [2024-07-11 18:33:58.422114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:12.103 [2024-07-11 18:33:58.422154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:12.103 [2024-07-11 18:33:58.422181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:12.103 [2024-07-11 18:33:58.422199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:12.103 [2024-07-11 18:33:58.430634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:12.103 [2024-07-11 18:33:58.430699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:12.103 [2024-07-11 18:33:58.430729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:12.103 [2024-07-11 18:33:58.430739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:12.104 [2024-07-11 18:33:58.437277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:12.104 [2024-07-11 18:33:58.437350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:12.104 [2024-07-11 18:33:58.437381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:12.104 [2024-07-11 18:33:58.437391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:12.104 [2024-07-11 18:33:58.437503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:12.104 [2024-07-11 18:33:58.437526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:12.104 [2024-07-11 18:33:58.437536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:12.104 [2024-07-11 18:33:58.437546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:12.104 [2024-07-11 
18:33:58.437585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:12.104 [2024-07-11 18:33:58.437598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:12.104 [2024-07-11 18:33:58.437624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:12.104 [2024-07-11 18:33:58.437634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:12.104 [2024-07-11 18:33:58.437734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:12.104 [2024-07-11 18:33:58.437751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:12.104 [2024-07-11 18:33:58.437767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:12.104 [2024-07-11 18:33:58.437776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:12.104 [2024-07-11 18:33:58.437822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:12.104 [2024-07-11 18:33:58.437837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:27:12.104 [2024-07-11 18:33:58.437848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:12.104 [2024-07-11 18:33:58.437867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:12.104 [2024-07-11 18:33:58.437934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:12.104 [2024-07-11 18:33:58.437954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:12.104 [2024-07-11 18:33:58.437977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:12.104 [2024-07-11 18:33:58.437987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:12.104 [2024-07-11 18:33:58.438037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:12.104 [2024-07-11 18:33:58.438053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:12.104 [2024-07-11 18:33:58.438063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:12.104 [2024-07-11 18:33:58.438072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:12.104 [2024-07-11 18:33:58.438274] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 37.686 ms, result 0 00:27:12.363 18:33:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:27:12.363 18:33:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:12.363 18:33:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:27:12.363 18:33:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:27:12.363 18:33:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:27:12.363 18:33:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:12.363 18:33:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:27:12.363 Remove shared memory files 00:27:12.363 18:33:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:27:12.363 18:33:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:27:12.363 18:33:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:27:12.363 18:33:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid96209 
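
[editor's note] The two "Validate MD5 checksum" iterations above follow the loop at ftl/upgrade_shutdown.sh@96-105: read 1 GiB windows back from the restored FTL device over NVMe/TCP, hash each window, and compare against the digest recorded before shutdown. A sketch under the assumption that the pre-shutdown digests live in an md5 array (the xtrace shows the loop counters and the tcp_dd/md5sum/cut pipeline, not the array itself):

test_validate_checksum() {
    local skip=0 sum
    for ((i = 0; i < iterations; i++)); do
        echo "Validate MD5 checksum, iteration $((i + 1))"
        # Read the next 1024 x 1 MiB window from the restored device.
        tcp_dd --ib=ftln1 --of="$testdir/file" --bs=1048576 --count=1024 \
            --qd=2 --skip=$skip
        skip=$((skip + 1024))
        # Require the digest to match the one captured before shutdown.
        sum=$(md5sum "$testdir/file" | cut -f1 -d' ')
        [[ $sum == "${md5[$i]}" ]] || return 1
    done
}
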
00:27:12.363 18:33:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:27:12.363 18:33:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:27:12.363 00:27:12.363 real 1m14.377s 00:27:12.363 user 1m41.555s 00:27:12.363 sys 0m20.736s 00:27:12.363 18:33:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:12.363 ************************************ 00:27:12.363 END TEST ftl_upgrade_shutdown 00:27:12.363 ************************************ 00:27:12.363 18:33:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:12.363 18:33:58 ftl -- common/autotest_common.sh@1142 -- # return 0 00:27:12.363 18:33:58 ftl -- ftl/ftl.sh@80 -- # [[ 1 -eq 1 ]] 00:27:12.363 18:33:58 ftl -- ftl/ftl.sh@81 -- # run_test ftl_restore_fast /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:27:12.363 18:33:58 ftl -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:27:12.363 18:33:58 ftl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:12.363 18:33:58 ftl -- common/autotest_common.sh@10 -- # set +x 00:27:12.363 ************************************ 00:27:12.363 START TEST ftl_restore_fast 00:27:12.363 ************************************ 00:27:12.363 18:33:58 ftl.ftl_restore_fast -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:27:12.363 * Looking for test storage... 00:27:12.363 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:27:12.363 18:33:58 ftl.ftl_restore_fast -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:27:12.363 18:33:58 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:27:12.363 18:33:58 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:27:12.363 18:33:58 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:27:12.363 18:33:58 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
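
[editor's note] The killprocess trace earlier in this test (autotest_common.sh@948-972, pid 96391) implies roughly the helper below; the sudo guard ordering matches the xtrace, but the return values and the early-exit behavior beyond what the trace shows are assumptions.

killprocess() {
    local pid=$1 process_name
    [ -z "$pid" ] && return 1            # @948: refuse an empty pid
    kill -0 "$pid" || return 0           # @952: assumed no-op if already gone
    if [ "$(uname)" = Linux ]; then      # @953
        # @954/@958: never signal a sudo wrapper, only the real reactor
        process_name=$(ps --no-headers -o comm= "$pid")
        [ "$process_name" = sudo ] && return 1
    fi
    echo "killing process with pid $pid" # @966
    kill "$pid"                          # @967
    wait "$pid"                          # @972: reap so shm files can be removed
}
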
00:27:12.363 18:33:58 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:27:12.363 18:33:58 ftl.ftl_restore_fast -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:12.363 18:33:58 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:27:12.363 18:33:58 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:27:12.363 18:33:58 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:12.363 18:33:58 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:12.363 18:33:58 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:27:12.363 18:33:58 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:27:12.623 18:33:58 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:12.623 18:33:58 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:12.623 18:33:58 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:27:12.623 18:33:58 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:27:12.623 18:33:58 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:12.623 18:33:58 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:12.623 18:33:58 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:27:12.623 18:33:58 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:27:12.623 18:33:58 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:12.623 18:33:58 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:12.623 18:33:58 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:12.623 18:33:58 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:12.623 18:33:58 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:27:12.623 18:33:58 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # spdk_ini_pid= 00:27:12.623 18:33:58 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:12.623 18:33:58 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:12.623 18:33:58 ftl.ftl_restore_fast -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:12.623 18:33:58 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mktemp -d 00:27:12.623 18:33:58 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.LyrscPFjkx 00:27:12.623 18:33:58 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:27:12.623 18:33:58 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:27:12.623 18:33:58 ftl.ftl_restore_fast -- ftl/restore.sh@19 -- # fast_shutdown=1 00:27:12.623 18:33:58 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:27:12.623 18:33:58 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:27:12.623 18:33:58 ftl.ftl_restore_fast -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:27:12.623 18:33:58 ftl.ftl_restore_fast 
-- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:27:12.623 18:33:58 ftl.ftl_restore_fast -- ftl/restore.sh@23 -- # shift 3 00:27:12.623 18:33:58 ftl.ftl_restore_fast -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:27:12.623 18:33:58 ftl.ftl_restore_fast -- ftl/restore.sh@25 -- # timeout=240 00:27:12.623 18:33:58 ftl.ftl_restore_fast -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:27:12.623 18:33:58 ftl.ftl_restore_fast -- ftl/restore.sh@39 -- # svcpid=96619 00:27:12.623 18:33:58 ftl.ftl_restore_fast -- ftl/restore.sh@41 -- # waitforlisten 96619 00:27:12.623 18:33:58 ftl.ftl_restore_fast -- common/autotest_common.sh@829 -- # '[' -z 96619 ']' 00:27:12.623 18:33:58 ftl.ftl_restore_fast -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:12.623 18:33:58 ftl.ftl_restore_fast -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:12.623 18:33:58 ftl.ftl_restore_fast -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:12.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:12.623 18:33:58 ftl.ftl_restore_fast -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:12.623 18:33:58 ftl.ftl_restore_fast -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:12.623 18:33:58 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:27:12.623 [2024-07-11 18:33:58.892931] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:27:12.623 [2024-07-11 18:33:58.893165] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96619 ] 00:27:12.881 [2024-07-11 18:33:59.040330] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:12.881 [2024-07-11 18:33:59.075431] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:13.449 18:33:59 ftl.ftl_restore_fast -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:13.449 18:33:59 ftl.ftl_restore_fast -- common/autotest_common.sh@862 -- # return 0 00:27:13.449 18:33:59 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:27:13.449 18:33:59 ftl.ftl_restore_fast -- ftl/common.sh@54 -- # local name=nvme0 00:27:13.449 18:33:59 ftl.ftl_restore_fast -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:27:13.449 18:33:59 ftl.ftl_restore_fast -- ftl/common.sh@56 -- # local size=103424 00:27:13.449 18:33:59 ftl.ftl_restore_fast -- ftl/common.sh@59 -- # local base_bdev 00:27:13.449 18:33:59 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:27:14.017 18:34:00 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:27:14.017 18:34:00 ftl.ftl_restore_fast -- ftl/common.sh@62 -- # local base_size 00:27:14.017 18:34:00 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:27:14.017 18:34:00 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:27:14.017 18:34:00 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:27:14.017 18:34:00 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:27:14.017 18:34:00 ftl.ftl_restore_fast -- 
common/autotest_common.sh@1381 -- # local nb 00:27:14.017 18:34:00 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:27:14.017 18:34:00 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:27:14.017 { 00:27:14.017 "name": "nvme0n1", 00:27:14.017 "aliases": [ 00:27:14.017 "ce0637c9-3ee4-4671-97b2-1b2e297d5af1" 00:27:14.017 ], 00:27:14.017 "product_name": "NVMe disk", 00:27:14.017 "block_size": 4096, 00:27:14.017 "num_blocks": 1310720, 00:27:14.017 "uuid": "ce0637c9-3ee4-4671-97b2-1b2e297d5af1", 00:27:14.017 "assigned_rate_limits": { 00:27:14.017 "rw_ios_per_sec": 0, 00:27:14.017 "rw_mbytes_per_sec": 0, 00:27:14.017 "r_mbytes_per_sec": 0, 00:27:14.017 "w_mbytes_per_sec": 0 00:27:14.017 }, 00:27:14.017 "claimed": true, 00:27:14.017 "claim_type": "read_many_write_one", 00:27:14.017 "zoned": false, 00:27:14.017 "supported_io_types": { 00:27:14.017 "read": true, 00:27:14.017 "write": true, 00:27:14.017 "unmap": true, 00:27:14.017 "flush": true, 00:27:14.017 "reset": true, 00:27:14.017 "nvme_admin": true, 00:27:14.017 "nvme_io": true, 00:27:14.017 "nvme_io_md": false, 00:27:14.017 "write_zeroes": true, 00:27:14.017 "zcopy": false, 00:27:14.017 "get_zone_info": false, 00:27:14.017 "zone_management": false, 00:27:14.017 "zone_append": false, 00:27:14.017 "compare": true, 00:27:14.017 "compare_and_write": false, 00:27:14.017 "abort": true, 00:27:14.017 "seek_hole": false, 00:27:14.017 "seek_data": false, 00:27:14.017 "copy": true, 00:27:14.017 "nvme_iov_md": false 00:27:14.017 }, 00:27:14.017 "driver_specific": { 00:27:14.017 "nvme": [ 00:27:14.017 { 00:27:14.017 "pci_address": "0000:00:11.0", 00:27:14.017 "trid": { 00:27:14.017 "trtype": "PCIe", 00:27:14.017 "traddr": "0000:00:11.0" 00:27:14.017 }, 00:27:14.017 "ctrlr_data": { 00:27:14.017 "cntlid": 0, 00:27:14.017 "vendor_id": "0x1b36", 00:27:14.017 "model_number": "QEMU NVMe Ctrl", 00:27:14.017 "serial_number": "12341", 00:27:14.017 "firmware_revision": "8.0.0", 00:27:14.017 "subnqn": "nqn.2019-08.org.qemu:12341", 00:27:14.017 "oacs": { 00:27:14.017 "security": 0, 00:27:14.017 "format": 1, 00:27:14.017 "firmware": 0, 00:27:14.017 "ns_manage": 1 00:27:14.017 }, 00:27:14.017 "multi_ctrlr": false, 00:27:14.017 "ana_reporting": false 00:27:14.017 }, 00:27:14.017 "vs": { 00:27:14.017 "nvme_version": "1.4" 00:27:14.017 }, 00:27:14.017 "ns_data": { 00:27:14.017 "id": 1, 00:27:14.017 "can_share": false 00:27:14.017 } 00:27:14.017 } 00:27:14.017 ], 00:27:14.017 "mp_policy": "active_passive" 00:27:14.017 } 00:27:14.017 } 00:27:14.017 ]' 00:27:14.017 18:34:00 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:27:14.017 18:34:00 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:27:14.017 18:34:00 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:27:14.276 18:34:00 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=1310720 00:27:14.276 18:34:00 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:27:14.276 18:34:00 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 5120 00:27:14.276 18:34:00 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # base_size=5120 00:27:14.276 18:34:00 ftl.ftl_restore_fast -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:27:14.276 18:34:00 ftl.ftl_restore_fast -- ftl/common.sh@67 -- # clear_lvols 00:27:14.276 18:34:00 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:14.276 18:34:00 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:27:14.276 18:34:00 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # stores=3dc550b8-bb8b-4e86-86bf-fa42d1c26780 00:27:14.535 18:34:00 ftl.ftl_restore_fast -- ftl/common.sh@29 -- # for lvs in $stores 00:27:14.535 18:34:00 ftl.ftl_restore_fast -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3dc550b8-bb8b-4e86-86bf-fa42d1c26780 00:27:14.796 18:34:00 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:27:14.796 18:34:01 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # lvs=ce3c104c-dab7-4a06-b6b9-4698d4ae0d19 00:27:14.796 18:34:01 ftl.ftl_restore_fast -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u ce3c104c-dab7-4a06-b6b9-4698d4ae0d19 00:27:15.055 18:34:01 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # split_bdev=3c492bc4-f6d7-4cb1-ad24-084f641af751 00:27:15.056 18:34:01 ftl.ftl_restore_fast -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:27:15.056 18:34:01 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 3c492bc4-f6d7-4cb1-ad24-084f641af751 00:27:15.056 18:34:01 ftl.ftl_restore_fast -- ftl/common.sh@35 -- # local name=nvc0 00:27:15.056 18:34:01 ftl.ftl_restore_fast -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:27:15.056 18:34:01 ftl.ftl_restore_fast -- ftl/common.sh@37 -- # local base_bdev=3c492bc4-f6d7-4cb1-ad24-084f641af751 00:27:15.056 18:34:01 ftl.ftl_restore_fast -- ftl/common.sh@38 -- # local cache_size= 00:27:15.056 18:34:01 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # get_bdev_size 3c492bc4-f6d7-4cb1-ad24-084f641af751 00:27:15.056 18:34:01 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=3c492bc4-f6d7-4cb1-ad24-084f641af751 00:27:15.056 18:34:01 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:27:15.056 18:34:01 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:27:15.056 18:34:01 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:27:15.056 18:34:01 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3c492bc4-f6d7-4cb1-ad24-084f641af751 00:27:15.315 18:34:01 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:27:15.315 { 00:27:15.315 "name": "3c492bc4-f6d7-4cb1-ad24-084f641af751", 00:27:15.315 "aliases": [ 00:27:15.315 "lvs/nvme0n1p0" 00:27:15.315 ], 00:27:15.315 "product_name": "Logical Volume", 00:27:15.315 "block_size": 4096, 00:27:15.315 "num_blocks": 26476544, 00:27:15.315 "uuid": "3c492bc4-f6d7-4cb1-ad24-084f641af751", 00:27:15.315 "assigned_rate_limits": { 00:27:15.315 "rw_ios_per_sec": 0, 00:27:15.315 "rw_mbytes_per_sec": 0, 00:27:15.315 "r_mbytes_per_sec": 0, 00:27:15.315 "w_mbytes_per_sec": 0 00:27:15.315 }, 00:27:15.315 "claimed": false, 00:27:15.315 "zoned": false, 00:27:15.315 "supported_io_types": { 00:27:15.315 "read": true, 00:27:15.315 "write": true, 00:27:15.315 "unmap": true, 00:27:15.315 "flush": false, 00:27:15.315 "reset": true, 00:27:15.315 "nvme_admin": false, 00:27:15.315 "nvme_io": false, 00:27:15.315 "nvme_io_md": false, 00:27:15.315 "write_zeroes": true, 00:27:15.315 "zcopy": false, 00:27:15.315 "get_zone_info": false, 00:27:15.315 "zone_management": false, 00:27:15.315 
"zone_append": false, 00:27:15.315 "compare": false, 00:27:15.315 "compare_and_write": false, 00:27:15.315 "abort": false, 00:27:15.315 "seek_hole": true, 00:27:15.315 "seek_data": true, 00:27:15.315 "copy": false, 00:27:15.315 "nvme_iov_md": false 00:27:15.315 }, 00:27:15.315 "driver_specific": { 00:27:15.315 "lvol": { 00:27:15.315 "lvol_store_uuid": "ce3c104c-dab7-4a06-b6b9-4698d4ae0d19", 00:27:15.315 "base_bdev": "nvme0n1", 00:27:15.315 "thin_provision": true, 00:27:15.315 "num_allocated_clusters": 0, 00:27:15.315 "snapshot": false, 00:27:15.315 "clone": false, 00:27:15.315 "esnap_clone": false 00:27:15.315 } 00:27:15.315 } 00:27:15.315 } 00:27:15.315 ]' 00:27:15.315 18:34:01 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:27:15.315 18:34:01 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:27:15.315 18:34:01 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:27:15.315 18:34:01 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=26476544 00:27:15.315 18:34:01 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:27:15.315 18:34:01 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 103424 00:27:15.315 18:34:01 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # local base_size=5171 00:27:15.315 18:34:01 ftl.ftl_restore_fast -- ftl/common.sh@44 -- # local nvc_bdev 00:27:15.315 18:34:01 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:27:15.573 18:34:01 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:27:15.573 18:34:01 ftl.ftl_restore_fast -- ftl/common.sh@47 -- # [[ -z '' ]] 00:27:15.573 18:34:01 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # get_bdev_size 3c492bc4-f6d7-4cb1-ad24-084f641af751 00:27:15.573 18:34:01 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=3c492bc4-f6d7-4cb1-ad24-084f641af751 00:27:15.573 18:34:01 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:27:15.573 18:34:01 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:27:15.573 18:34:01 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:27:15.573 18:34:01 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3c492bc4-f6d7-4cb1-ad24-084f641af751 00:27:15.832 18:34:02 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:27:15.832 { 00:27:15.832 "name": "3c492bc4-f6d7-4cb1-ad24-084f641af751", 00:27:15.832 "aliases": [ 00:27:15.832 "lvs/nvme0n1p0" 00:27:15.832 ], 00:27:15.832 "product_name": "Logical Volume", 00:27:15.832 "block_size": 4096, 00:27:15.832 "num_blocks": 26476544, 00:27:15.832 "uuid": "3c492bc4-f6d7-4cb1-ad24-084f641af751", 00:27:15.832 "assigned_rate_limits": { 00:27:15.832 "rw_ios_per_sec": 0, 00:27:15.832 "rw_mbytes_per_sec": 0, 00:27:15.832 "r_mbytes_per_sec": 0, 00:27:15.832 "w_mbytes_per_sec": 0 00:27:15.832 }, 00:27:15.832 "claimed": false, 00:27:15.832 "zoned": false, 00:27:15.832 "supported_io_types": { 00:27:15.832 "read": true, 00:27:15.832 "write": true, 00:27:15.832 "unmap": true, 00:27:15.832 "flush": false, 00:27:15.832 "reset": true, 00:27:15.832 "nvme_admin": false, 00:27:15.832 "nvme_io": false, 00:27:15.832 "nvme_io_md": false, 00:27:15.832 "write_zeroes": true, 00:27:15.832 "zcopy": false, 00:27:15.832 "get_zone_info": false, 00:27:15.832 
"zone_management": false, 00:27:15.832 "zone_append": false, 00:27:15.832 "compare": false, 00:27:15.832 "compare_and_write": false, 00:27:15.832 "abort": false, 00:27:15.832 "seek_hole": true, 00:27:15.832 "seek_data": true, 00:27:15.832 "copy": false, 00:27:15.832 "nvme_iov_md": false 00:27:15.832 }, 00:27:15.832 "driver_specific": { 00:27:15.832 "lvol": { 00:27:15.832 "lvol_store_uuid": "ce3c104c-dab7-4a06-b6b9-4698d4ae0d19", 00:27:15.832 "base_bdev": "nvme0n1", 00:27:15.832 "thin_provision": true, 00:27:15.832 "num_allocated_clusters": 0, 00:27:15.832 "snapshot": false, 00:27:15.832 "clone": false, 00:27:15.832 "esnap_clone": false 00:27:15.832 } 00:27:15.832 } 00:27:15.832 } 00:27:15.832 ]' 00:27:15.832 18:34:02 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:27:16.090 18:34:02 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:27:16.090 18:34:02 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:27:16.090 18:34:02 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=26476544 00:27:16.090 18:34:02 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:27:16.090 18:34:02 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 103424 00:27:16.090 18:34:02 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # cache_size=5171 00:27:16.090 18:34:02 ftl.ftl_restore_fast -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:27:16.348 18:34:02 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:27:16.348 18:34:02 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # get_bdev_size 3c492bc4-f6d7-4cb1-ad24-084f641af751 00:27:16.348 18:34:02 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=3c492bc4-f6d7-4cb1-ad24-084f641af751 00:27:16.348 18:34:02 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:27:16.348 18:34:02 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:27:16.348 18:34:02 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:27:16.348 18:34:02 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3c492bc4-f6d7-4cb1-ad24-084f641af751 00:27:16.606 18:34:02 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:27:16.606 { 00:27:16.606 "name": "3c492bc4-f6d7-4cb1-ad24-084f641af751", 00:27:16.606 "aliases": [ 00:27:16.606 "lvs/nvme0n1p0" 00:27:16.606 ], 00:27:16.606 "product_name": "Logical Volume", 00:27:16.606 "block_size": 4096, 00:27:16.606 "num_blocks": 26476544, 00:27:16.606 "uuid": "3c492bc4-f6d7-4cb1-ad24-084f641af751", 00:27:16.606 "assigned_rate_limits": { 00:27:16.606 "rw_ios_per_sec": 0, 00:27:16.606 "rw_mbytes_per_sec": 0, 00:27:16.606 "r_mbytes_per_sec": 0, 00:27:16.606 "w_mbytes_per_sec": 0 00:27:16.606 }, 00:27:16.606 "claimed": false, 00:27:16.606 "zoned": false, 00:27:16.606 "supported_io_types": { 00:27:16.606 "read": true, 00:27:16.606 "write": true, 00:27:16.606 "unmap": true, 00:27:16.606 "flush": false, 00:27:16.606 "reset": true, 00:27:16.606 "nvme_admin": false, 00:27:16.606 "nvme_io": false, 00:27:16.606 "nvme_io_md": false, 00:27:16.606 "write_zeroes": true, 00:27:16.606 "zcopy": false, 00:27:16.606 "get_zone_info": false, 00:27:16.606 "zone_management": false, 00:27:16.606 "zone_append": false, 00:27:16.606 "compare": false, 00:27:16.606 "compare_and_write": false, 00:27:16.606 "abort": false, 
00:27:16.606 "seek_hole": true, 00:27:16.606 "seek_data": true, 00:27:16.606 "copy": false, 00:27:16.606 "nvme_iov_md": false 00:27:16.606 }, 00:27:16.606 "driver_specific": { 00:27:16.606 "lvol": { 00:27:16.606 "lvol_store_uuid": "ce3c104c-dab7-4a06-b6b9-4698d4ae0d19", 00:27:16.607 "base_bdev": "nvme0n1", 00:27:16.607 "thin_provision": true, 00:27:16.607 "num_allocated_clusters": 0, 00:27:16.607 "snapshot": false, 00:27:16.607 "clone": false, 00:27:16.607 "esnap_clone": false 00:27:16.607 } 00:27:16.607 } 00:27:16.607 } 00:27:16.607 ]' 00:27:16.607 18:34:02 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:27:16.607 18:34:02 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:27:16.607 18:34:02 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:27:16.607 18:34:02 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=26476544 00:27:16.607 18:34:02 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:27:16.607 18:34:02 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 103424 00:27:16.607 18:34:02 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:27:16.607 18:34:02 ftl.ftl_restore_fast -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 3c492bc4-f6d7-4cb1-ad24-084f641af751 --l2p_dram_limit 10' 00:27:16.607 18:34:02 ftl.ftl_restore_fast -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:27:16.607 18:34:02 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:27:16.607 18:34:02 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:27:16.607 18:34:02 ftl.ftl_restore_fast -- ftl/restore.sh@54 -- # '[' 1 -eq 1 ']' 00:27:16.607 18:34:02 ftl.ftl_restore_fast -- ftl/restore.sh@55 -- # ftl_construct_args+=' --fast-shutdown' 00:27:16.607 18:34:02 ftl.ftl_restore_fast -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 3c492bc4-f6d7-4cb1-ad24-084f641af751 --l2p_dram_limit 10 -c nvc0n1p0 --fast-shutdown 00:27:16.866 [2024-07-11 18:34:03.035876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.866 [2024-07-11 18:34:03.035926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:16.866 [2024-07-11 18:34:03.035947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:16.866 [2024-07-11 18:34:03.035958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.866 [2024-07-11 18:34:03.036033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.866 [2024-07-11 18:34:03.036052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:16.866 [2024-07-11 18:34:03.036067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:27:16.866 [2024-07-11 18:34:03.036077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.866 [2024-07-11 18:34:03.036160] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:16.866 [2024-07-11 18:34:03.036508] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:16.866 [2024-07-11 18:34:03.036538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.866 [2024-07-11 18:34:03.036549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:16.866 [2024-07-11 18:34:03.036571] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.398 ms 00:27:16.866 [2024-07-11 18:34:03.036581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.866 [2024-07-11 18:34:03.036700] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 8dbb777d-c192-43ab-944a-cae90757a7a8 00:27:16.866 [2024-07-11 18:34:03.037639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.866 [2024-07-11 18:34:03.037693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:27:16.866 [2024-07-11 18:34:03.037708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:27:16.866 [2024-07-11 18:34:03.037729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.866 [2024-07-11 18:34:03.041825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.866 [2024-07-11 18:34:03.041889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:16.866 [2024-07-11 18:34:03.041904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.047 ms 00:27:16.866 [2024-07-11 18:34:03.041916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.866 [2024-07-11 18:34:03.041999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.866 [2024-07-11 18:34:03.042024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:16.866 [2024-07-11 18:34:03.042036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:27:16.866 [2024-07-11 18:34:03.042048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.866 [2024-07-11 18:34:03.042119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.866 [2024-07-11 18:34:03.042138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:16.866 [2024-07-11 18:34:03.042150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:27:16.866 [2024-07-11 18:34:03.042161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.866 [2024-07-11 18:34:03.042191] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:16.866 [2024-07-11 18:34:03.043596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.866 [2024-07-11 18:34:03.043633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:16.866 [2024-07-11 18:34:03.043652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.409 ms 00:27:16.866 [2024-07-11 18:34:03.043663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.866 [2024-07-11 18:34:03.043708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.866 [2024-07-11 18:34:03.043724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:16.866 [2024-07-11 18:34:03.043753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:27:16.866 [2024-07-11 18:34:03.043764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.866 [2024-07-11 18:34:03.043805] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:27:16.866 [2024-07-11 18:34:03.043988] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:16.866 [2024-07-11 18:34:03.044008] 
upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:16.866 [2024-07-11 18:34:03.044029] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:27:16.866 [2024-07-11 18:34:03.044044] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:16.866 [2024-07-11 18:34:03.044055] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:16.866 [2024-07-11 18:34:03.044067] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:16.866 [2024-07-11 18:34:03.044078] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:16.866 [2024-07-11 18:34:03.044089] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:16.866 [2024-07-11 18:34:03.044098] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:16.866 [2024-07-11 18:34:03.044110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.866 [2024-07-11 18:34:03.044119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:16.866 [2024-07-11 18:34:03.044131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.309 ms 00:27:16.866 [2024-07-11 18:34:03.044147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.866 [2024-07-11 18:34:03.044268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.866 [2024-07-11 18:34:03.044283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:16.866 [2024-07-11 18:34:03.044298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:27:16.866 [2024-07-11 18:34:03.044308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.866 [2024-07-11 18:34:03.044414] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:16.866 [2024-07-11 18:34:03.044432] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:16.866 [2024-07-11 18:34:03.044444] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:16.866 [2024-07-11 18:34:03.044454] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:16.866 [2024-07-11 18:34:03.044465] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:16.866 [2024-07-11 18:34:03.044475] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:16.866 [2024-07-11 18:34:03.044487] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:16.866 [2024-07-11 18:34:03.044497] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:16.866 [2024-07-11 18:34:03.044524] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:16.866 [2024-07-11 18:34:03.044532] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:16.866 [2024-07-11 18:34:03.044542] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:16.866 [2024-07-11 18:34:03.044552] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:16.866 [2024-07-11 18:34:03.044562] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:16.866 [2024-07-11 18:34:03.044570] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:16.866 [2024-07-11 18:34:03.044582] ftl_layout.c: 
119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:16.866 [2024-07-11 18:34:03.044591] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:16.866 [2024-07-11 18:34:03.044601] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:16.866 [2024-07-11 18:34:03.044610] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:16.866 [2024-07-11 18:34:03.044621] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:16.866 [2024-07-11 18:34:03.044630] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:16.866 [2024-07-11 18:34:03.044640] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:16.866 [2024-07-11 18:34:03.044649] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:16.866 [2024-07-11 18:34:03.044659] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:16.866 [2024-07-11 18:34:03.044667] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:16.866 [2024-07-11 18:34:03.044677] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:16.866 [2024-07-11 18:34:03.044685] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:16.866 [2024-07-11 18:34:03.044695] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:16.866 [2024-07-11 18:34:03.044703] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:16.866 [2024-07-11 18:34:03.044713] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:16.866 [2024-07-11 18:34:03.044722] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:16.866 [2024-07-11 18:34:03.044733] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:16.866 [2024-07-11 18:34:03.044742] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:16.866 [2024-07-11 18:34:03.044753] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:16.866 [2024-07-11 18:34:03.044762] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:16.866 [2024-07-11 18:34:03.044772] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:16.866 [2024-07-11 18:34:03.044781] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:16.867 [2024-07-11 18:34:03.044790] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:16.867 [2024-07-11 18:34:03.044799] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:16.867 [2024-07-11 18:34:03.044809] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:16.867 [2024-07-11 18:34:03.044817] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:16.867 [2024-07-11 18:34:03.044827] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:16.867 [2024-07-11 18:34:03.044836] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:16.867 [2024-07-11 18:34:03.044845] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:16.867 [2024-07-11 18:34:03.044853] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:16.867 [2024-07-11 18:34:03.044874] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:16.867 [2024-07-11 18:34:03.044886] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 
00:27:16.867 [2024-07-11 18:34:03.044899] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:16.867 [2024-07-11 18:34:03.044909] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:16.867 [2024-07-11 18:34:03.044920] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:16.867 [2024-07-11 18:34:03.044928] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:16.867 [2024-07-11 18:34:03.044939] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:16.867 [2024-07-11 18:34:03.044948] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:16.867 [2024-07-11 18:34:03.044958] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:16.867 [2024-07-11 18:34:03.044981] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:16.867 [2024-07-11 18:34:03.044997] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:16.867 [2024-07-11 18:34:03.045008] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:16.867 [2024-07-11 18:34:03.045019] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:16.867 [2024-07-11 18:34:03.045028] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:16.867 [2024-07-11 18:34:03.045039] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:16.867 [2024-07-11 18:34:03.045048] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:16.867 [2024-07-11 18:34:03.045059] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:16.867 [2024-07-11 18:34:03.045069] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:16.867 [2024-07-11 18:34:03.045082] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:16.867 [2024-07-11 18:34:03.045091] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:16.867 [2024-07-11 18:34:03.045102] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:16.867 [2024-07-11 18:34:03.045411] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:16.867 [2024-07-11 18:34:03.045490] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:16.867 [2024-07-11 18:34:03.045614] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:16.867 [2024-07-11 18:34:03.045682] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 
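The nvc metadata region table above is internally consistent with the geometry the dump reported earlier. A hedged cross-check (assuming the 4 KiB FTL block size that every figure in this log implies; nothing here is part of the test suite itself):

MiB = 1024 * 1024
l2p_bytes = 20971520 * 4            # "L2P entries: 20971520" x "L2P address size: 4"
p2l_bytes = 2048 * 4096             # "P2L checkpoint pages: 2048", one 4 KiB block per page
print(l2p_bytes / MiB)              # -> 80.0, matching "Region l2p ... blocks: 80.00 MiB"
print(p2l_bytes / MiB)              # -> 8.0, matching each "Region p2lN ... blocks: 8.00 MiB"
assert 0x5000 * 4096 == l2p_bytes   # blk_sz:0x5000 for region type 0x2 in the table above
assert 0x800 * 4096 == p2l_bytes    # blk_sz:0x800 for region types 0xa through 0xd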
00:27:16.867 [2024-07-11 18:34:03.045788] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:16.867 [2024-07-11 18:34:03.045856] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:16.867 [2024-07-11 18:34:03.045910] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:16.867 [2024-07-11 18:34:03.046031] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:16.867 [2024-07-11 18:34:03.046153] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:16.867 [2024-07-11 18:34:03.046288] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:16.867 [2024-07-11 18:34:03.046404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.867 [2024-07-11 18:34:03.046513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:16.867 [2024-07-11 18:34:03.046611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.046 ms 00:27:16.867 [2024-07-11 18:34:03.046664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.867 [2024-07-11 18:34:03.046853] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:27:16.867 [2024-07-11 18:34:03.046926] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:27:19.399 [2024-07-11 18:34:05.202366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.399 [2024-07-11 18:34:05.202673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:27:19.399 [2024-07-11 18:34:05.202810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2155.526 ms 00:27:19.399 [2024-07-11 18:34:05.202837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.399 [2024-07-11 18:34:05.209651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.399 [2024-07-11 18:34:05.209698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:19.399 [2024-07-11 18:34:05.209716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.721 ms 00:27:19.399 [2024-07-11 18:34:05.209728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.399 [2024-07-11 18:34:05.209825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.399 [2024-07-11 18:34:05.209845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:19.399 [2024-07-11 18:34:05.209856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:27:19.399 [2024-07-11 18:34:05.209867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.399 [2024-07-11 18:34:05.217202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.399 [2024-07-11 18:34:05.217253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:19.399 [2024-07-11 18:34:05.217269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.266 ms 00:27:19.399 [2024-07-11 18:34:05.217280] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.399 [2024-07-11 18:34:05.217320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.399 [2024-07-11 18:34:05.217337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:19.399 [2024-07-11 18:34:05.217349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:19.399 [2024-07-11 18:34:05.217359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.399 [2024-07-11 18:34:05.217666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.399 [2024-07-11 18:34:05.217685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:19.399 [2024-07-11 18:34:05.217696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.263 ms 00:27:19.399 [2024-07-11 18:34:05.217707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.399 [2024-07-11 18:34:05.217828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.399 [2024-07-11 18:34:05.217848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:19.399 [2024-07-11 18:34:05.217859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:27:19.399 [2024-07-11 18:34:05.217870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.399 [2024-07-11 18:34:05.223036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.399 [2024-07-11 18:34:05.223076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:19.399 [2024-07-11 18:34:05.223123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.144 ms 00:27:19.399 [2024-07-11 18:34:05.223135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.399 [2024-07-11 18:34:05.230596] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:19.399 [2024-07-11 18:34:05.233156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.399 [2024-07-11 18:34:05.233186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:19.399 [2024-07-11 18:34:05.233203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.910 ms 00:27:19.399 [2024-07-11 18:34:05.233213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.399 [2024-07-11 18:34:05.303643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.399 [2024-07-11 18:34:05.303718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:27:19.399 [2024-07-11 18:34:05.303805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 70.390 ms 00:27:19.399 [2024-07-11 18:34:05.303831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.399 [2024-07-11 18:34:05.304030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.399 [2024-07-11 18:34:05.304053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:19.399 [2024-07-11 18:34:05.304067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.148 ms 00:27:19.399 [2024-07-11 18:34:05.304120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.399 [2024-07-11 18:34:05.307658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.399 [2024-07-11 18:34:05.307705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial 
band info metadata 00:27:19.399 [2024-07-11 18:34:05.307754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.489 ms 00:27:19.399 [2024-07-11 18:34:05.307777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.399 [2024-07-11 18:34:05.310872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.399 [2024-07-11 18:34:05.310908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:27:19.399 [2024-07-11 18:34:05.310925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.032 ms 00:27:19.399 [2024-07-11 18:34:05.310935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.399 [2024-07-11 18:34:05.311283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.399 [2024-07-11 18:34:05.311304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:19.399 [2024-07-11 18:34:05.311317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.306 ms 00:27:19.399 [2024-07-11 18:34:05.311327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.400 [2024-07-11 18:34:05.348041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.400 [2024-07-11 18:34:05.348141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:27:19.400 [2024-07-11 18:34:05.348165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.663 ms 00:27:19.400 [2024-07-11 18:34:05.348179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.400 [2024-07-11 18:34:05.352415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.400 [2024-07-11 18:34:05.352451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:27:19.400 [2024-07-11 18:34:05.352485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.170 ms 00:27:19.400 [2024-07-11 18:34:05.352504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.400 [2024-07-11 18:34:05.356139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.400 [2024-07-11 18:34:05.356173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:27:19.400 [2024-07-11 18:34:05.356205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.589 ms 00:27:19.400 [2024-07-11 18:34:05.356215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.400 [2024-07-11 18:34:05.360015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.400 [2024-07-11 18:34:05.360053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:19.400 [2024-07-11 18:34:05.360071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.755 ms 00:27:19.400 [2024-07-11 18:34:05.360112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.400 [2024-07-11 18:34:05.360168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.400 [2024-07-11 18:34:05.360200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:19.400 [2024-07-11 18:34:05.360214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:27:19.400 [2024-07-11 18:34:05.360225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.400 [2024-07-11 18:34:05.360295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.400 [2024-07-11 18:34:05.360318] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:19.400 [2024-07-11 18:34:05.360331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:27:19.400 [2024-07-11 18:34:05.360343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.400 [2024-07-11 18:34:05.361600] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2325.187 ms, result 0 00:27:19.400 { 00:27:19.400 "name": "ftl0", 00:27:19.400 "uuid": "8dbb777d-c192-43ab-944a-cae90757a7a8" 00:27:19.400 } 00:27:19.400 18:34:05 ftl.ftl_restore_fast -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:27:19.400 18:34:05 ftl.ftl_restore_fast -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:27:19.400 18:34:05 ftl.ftl_restore_fast -- ftl/restore.sh@63 -- # echo ']}' 00:27:19.400 18:34:05 ftl.ftl_restore_fast -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:27:19.660 [2024-07-11 18:34:05.841621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.660 [2024-07-11 18:34:05.841691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:19.660 [2024-07-11 18:34:05.841713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:19.660 [2024-07-11 18:34:05.841725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.660 [2024-07-11 18:34:05.841756] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:19.660 [2024-07-11 18:34:05.842166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.660 [2024-07-11 18:34:05.842189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:19.660 [2024-07-11 18:34:05.842206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.385 ms 00:27:19.660 [2024-07-11 18:34:05.842216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.660 [2024-07-11 18:34:05.842546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.660 [2024-07-11 18:34:05.842568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:19.660 [2024-07-11 18:34:05.842582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.253 ms 00:27:19.660 [2024-07-11 18:34:05.842594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.660 [2024-07-11 18:34:05.845523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.660 [2024-07-11 18:34:05.845551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:19.660 [2024-07-11 18:34:05.845583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.904 ms 00:27:19.660 [2024-07-11 18:34:05.845594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.660 [2024-07-11 18:34:05.850975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.660 [2024-07-11 18:34:05.851002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:19.660 [2024-07-11 18:34:05.851034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.355 ms 00:27:19.660 [2024-07-11 18:34:05.851043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.660 [2024-07-11 18:34:05.852680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:27:19.660 [2024-07-11 18:34:05.852730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:19.660 [2024-07-11 18:34:05.852764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.470 ms 00:27:19.660 [2024-07-11 18:34:05.852775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.660 [2024-07-11 18:34:05.857005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.660 [2024-07-11 18:34:05.857043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:19.660 [2024-07-11 18:34:05.857077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.188 ms 00:27:19.660 [2024-07-11 18:34:05.857088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.660 [2024-07-11 18:34:05.857244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.660 [2024-07-11 18:34:05.857264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:19.660 [2024-07-11 18:34:05.857281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:27:19.660 [2024-07-11 18:34:05.857291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.660 [2024-07-11 18:34:05.859131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.660 [2024-07-11 18:34:05.859206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:27:19.660 [2024-07-11 18:34:05.859223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.800 ms 00:27:19.660 [2024-07-11 18:34:05.859233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.660 [2024-07-11 18:34:05.860786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.660 [2024-07-11 18:34:05.860837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:27:19.660 [2024-07-11 18:34:05.860871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.510 ms 00:27:19.660 [2024-07-11 18:34:05.860881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.660 [2024-07-11 18:34:05.862163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.660 [2024-07-11 18:34:05.862232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:19.660 [2024-07-11 18:34:05.862251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.239 ms 00:27:19.660 [2024-07-11 18:34:05.862263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.660 [2024-07-11 18:34:05.863381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.660 [2024-07-11 18:34:05.863453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:19.660 [2024-07-11 18:34:05.863472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.014 ms 00:27:19.660 [2024-07-11 18:34:05.863482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.660 [2024-07-11 18:34:05.863526] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:19.660 [2024-07-11 18:34:05.863548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:27:19.660 [2024-07-11 18:34:05.863577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:19.660 [2024-07-11 18:34:05.863590] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:19.660 [... ftl_debug.c: 167:ftl_dev_dump_bands entries for Bands 4 through 100 condensed: every one of the 100 bands reports the identical line "0 / 261120 wr_cnt: 0 state: free" ...] 00:27:19.661 [2024-07-11 18:34:05.864830] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:19.661 [2024-07-11 18:34:05.864844] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8dbb777d-c192-43ab-944a-cae90757a7a8 00:27:19.661
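Every management step in this log is bracketed by the same four trace_step lines (Action, name, duration, status), which makes a run like this easy to profile after the fact. A minimal sketch, assuming a saved copy of this console output with one log entry per line; the autotest.log default path is a placeholder, not a file the harness produces:

#!/usr/bin/env python3
# Rank FTL management steps by reported duration from a saved log.
import re
import sys

NAME = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] name: (.+)$")
DUR = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] duration: ([0-9.]+) ms")

def slowest_steps(path, top=10):
    steps, pending = [], None
    with open(path, errors="replace") as log:
        for line in log:
            m = NAME.search(line)
            if m:
                pending = m.group(1).strip()  # remember the step name ...
                continue
            m = DUR.search(line)
            if m and pending is not None:     # ... and pair it with its duration
                steps.append((float(m.group(1)), pending))
                pending = None
    return sorted(steps, reverse=True)[:top]

if __name__ == "__main__":
    for ms, name in slowest_steps(sys.argv[1] if len(sys.argv) > 1 else "autotest.log"):
        print(f"{ms:10.3f} ms  {name}")

Run against the startup above, this would put "Scrub NV cache" (2155.526 ms) at the top, accounting for most of the 2325.187 ms "FTL startup" total.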
[2024-07-11 18:34:05.864855] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:27:19.661 [2024-07-11 18:34:05.864867] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:27:19.661 [2024-07-11 18:34:05.864877] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:27:19.661 [2024-07-11 18:34:05.864888] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:27:19.661 [2024-07-11 18:34:05.864898] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:19.661 [2024-07-11 18:34:05.864914] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:19.661 [2024-07-11 18:34:05.864924] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:19.661 [2024-07-11 18:34:05.864935] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:19.661 [2024-07-11 18:34:05.864944] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:19.661 [2024-07-11 18:34:05.864956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.661 [2024-07-11 18:34:05.864967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:19.661 [2024-07-11 18:34:05.864979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.434 ms 00:27:19.661 [2024-07-11 18:34:05.864990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.661 [2024-07-11 18:34:05.866622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.661 [2024-07-11 18:34:05.866684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:19.661 [2024-07-11 18:34:05.866732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.586 ms 00:27:19.661 [2024-07-11 18:34:05.866771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.661 [2024-07-11 18:34:05.866965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.661 [2024-07-11 18:34:05.867111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:19.661 [2024-07-11 18:34:05.867224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:27:19.661 [2024-07-11 18:34:05.867272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.661 [2024-07-11 18:34:05.872429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:19.661 [2024-07-11 18:34:05.872617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:19.661 [2024-07-11 18:34:05.872746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:19.661 [2024-07-11 18:34:05.872796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.661 [2024-07-11 18:34:05.872957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:19.661 [2024-07-11 18:34:05.873066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:19.661 [2024-07-11 18:34:05.873205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:19.661 [2024-07-11 18:34:05.873343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.661 [2024-07-11 18:34:05.873480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:19.661 [2024-07-11 18:34:05.873540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:19.661 [2024-07-11 18:34:05.873651] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:19.661 [2024-07-11 18:34:05.873784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.661 [2024-07-11 18:34:05.873863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:19.661 [2024-07-11 18:34:05.873977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:19.661 [2024-07-11 18:34:05.874036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:19.661 [2024-07-11 18:34:05.874173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.661 [2024-07-11 18:34:05.881847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:19.661 [2024-07-11 18:34:05.882125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:19.661 [2024-07-11 18:34:05.882244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:19.661 [2024-07-11 18:34:05.882397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.661 [2024-07-11 18:34:05.888566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:19.661 [2024-07-11 18:34:05.888780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:19.661 [2024-07-11 18:34:05.888898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:19.661 [2024-07-11 18:34:05.888945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.661 [2024-07-11 18:34:05.889178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:19.661 [2024-07-11 18:34:05.889317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:19.661 [2024-07-11 18:34:05.889432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:19.661 [2024-07-11 18:34:05.889570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.661 [2024-07-11 18:34:05.889649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:19.661 [2024-07-11 18:34:05.889667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:19.661 [2024-07-11 18:34:05.889680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:19.661 [2024-07-11 18:34:05.889691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.661 [2024-07-11 18:34:05.889785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:19.661 [2024-07-11 18:34:05.889803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:19.661 [2024-07-11 18:34:05.889817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:19.661 [2024-07-11 18:34:05.889828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.661 [2024-07-11 18:34:05.889885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:19.662 [2024-07-11 18:34:05.889905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:19.662 [2024-07-11 18:34:05.889919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:19.662 [2024-07-11 18:34:05.889932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.662 [2024-07-11 18:34:05.889980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:19.662 [2024-07-11 18:34:05.889995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open 
cache bdev 00:27:19.662 [2024-07-11 18:34:05.890010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:19.662 [2024-07-11 18:34:05.890021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.662 [2024-07-11 18:34:05.890076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:19.662 [2024-07-11 18:34:05.890285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:19.662 [2024-07-11 18:34:05.890391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:19.662 [2024-07-11 18:34:05.890438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.662 [2024-07-11 18:34:05.890647] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 48.972 ms, result 0 00:27:19.662 true 00:27:19.662 18:34:05 ftl.ftl_restore_fast -- ftl/restore.sh@66 -- # killprocess 96619 00:27:19.662 18:34:05 ftl.ftl_restore_fast -- common/autotest_common.sh@948 -- # '[' -z 96619 ']' 00:27:19.662 18:34:05 ftl.ftl_restore_fast -- common/autotest_common.sh@952 -- # kill -0 96619 00:27:19.662 18:34:05 ftl.ftl_restore_fast -- common/autotest_common.sh@953 -- # uname 00:27:19.662 18:34:05 ftl.ftl_restore_fast -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:19.662 18:34:05 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 96619 00:27:19.662 killing process with pid 96619 00:27:19.662 18:34:05 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:19.662 18:34:05 ftl.ftl_restore_fast -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:19.662 18:34:05 ftl.ftl_restore_fast -- common/autotest_common.sh@966 -- # echo 'killing process with pid 96619' 00:27:19.662 18:34:05 ftl.ftl_restore_fast -- common/autotest_common.sh@967 -- # kill 96619 00:27:19.662 18:34:05 ftl.ftl_restore_fast -- common/autotest_common.sh@972 -- # wait 96619 00:27:23.014 18:34:08 ftl.ftl_restore_fast -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:27:26.303 262144+0 records in 00:27:26.303 262144+0 records out 00:27:26.303 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.7825 s, 284 MB/s 00:27:26.303 18:34:12 ftl.ftl_restore_fast -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:27:28.204 18:34:14 ftl.ftl_restore_fast -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:28.204 [2024-07-11 18:34:14.409264] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
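The dd summary above checks out: bs=4K count=256K is exactly 1 GiB of random test data, and 1 GiB in 3.7825 s is the reported 284 MB/s once you note that dd counts decimal megabytes. A quick verification of that arithmetic (just a sketch, not part of the test):

records = 256 * 1024                 # count=256K
block = 4 * 1024                     # bs=4K
total = records * block
print(total)                         # -> 1073741824 bytes (1.1 GB, 1.0 GiB)
print(round(total / 3.7825 / 1e6))   # -> 284 MB/s, as dd reported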
00:27:28.204 [2024-07-11 18:34:14.409456] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96812 ] 00:27:28.204 [2024-07-11 18:34:14.559325] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:28.204 [2024-07-11 18:34:14.607244] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:28.464 [2024-07-11 18:34:14.700516] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:28.464 [2024-07-11 18:34:14.700622] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:28.464 [2024-07-11 18:34:14.857182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.464 [2024-07-11 18:34:14.857225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:28.464 [2024-07-11 18:34:14.857251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:28.464 [2024-07-11 18:34:14.857262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.464 [2024-07-11 18:34:14.857327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.464 [2024-07-11 18:34:14.857344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:28.464 [2024-07-11 18:34:14.857358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:27:28.464 [2024-07-11 18:34:14.857367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.464 [2024-07-11 18:34:14.857394] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:28.464 [2024-07-11 18:34:14.857640] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:28.464 [2024-07-11 18:34:14.857665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.464 [2024-07-11 18:34:14.857676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:28.464 [2024-07-11 18:34:14.857694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.278 ms 00:27:28.464 [2024-07-11 18:34:14.857707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.464 [2024-07-11 18:34:14.858849] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:27:28.464 [2024-07-11 18:34:14.860950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.464 [2024-07-11 18:34:14.860987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:28.464 [2024-07-11 18:34:14.861023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.103 ms 00:27:28.464 [2024-07-11 18:34:14.861033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.464 [2024-07-11 18:34:14.861103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.464 [2024-07-11 18:34:14.861121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:27:28.464 [2024-07-11 18:34:14.861132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:27:28.464 [2024-07-11 18:34:14.861141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.464 [2024-07-11 18:34:14.865308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:27:28.464 [2024-07-11 18:34:14.865349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:28.464 [2024-07-11 18:34:14.865362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.103 ms 00:27:28.464 [2024-07-11 18:34:14.865372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.464 [2024-07-11 18:34:14.865459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.464 [2024-07-11 18:34:14.865476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:28.464 [2024-07-11 18:34:14.865486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:27:28.464 [2024-07-11 18:34:14.865496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.464 [2024-07-11 18:34:14.865574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.464 [2024-07-11 18:34:14.865590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:28.464 [2024-07-11 18:34:14.865614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:27:28.464 [2024-07-11 18:34:14.865624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.464 [2024-07-11 18:34:14.865653] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:28.464 [2024-07-11 18:34:14.866897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.464 [2024-07-11 18:34:14.866929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:28.464 [2024-07-11 18:34:14.866959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.251 ms 00:27:28.464 [2024-07-11 18:34:14.866970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.464 [2024-07-11 18:34:14.867007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.464 [2024-07-11 18:34:14.867024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:28.464 [2024-07-11 18:34:14.867035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:27:28.464 [2024-07-11 18:34:14.867060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.464 [2024-07-11 18:34:14.867115] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:28.464 [2024-07-11 18:34:14.867323] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:28.464 [2024-07-11 18:34:14.867467] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:28.464 [2024-07-11 18:34:14.867637] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:27:28.464 [2024-07-11 18:34:14.867758] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:28.464 [2024-07-11 18:34:14.867773] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:28.464 [2024-07-11 18:34:14.867809] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:27:28.464 [2024-07-11 18:34:14.867824] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:28.464 [2024-07-11 18:34:14.867836] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:28.464 [2024-07-11 18:34:14.867847] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:28.464 [2024-07-11 18:34:14.867856] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:28.464 [2024-07-11 18:34:14.867866] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:28.464 [2024-07-11 18:34:14.867875] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:28.464 [2024-07-11 18:34:14.867887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.464 [2024-07-11 18:34:14.867897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:28.464 [2024-07-11 18:34:14.867913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.775 ms 00:27:28.464 [2024-07-11 18:34:14.867923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.464 [2024-07-11 18:34:14.868019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.464 [2024-07-11 18:34:14.868042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:28.464 [2024-07-11 18:34:14.868056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:27:28.464 [2024-07-11 18:34:14.868090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.464 [2024-07-11 18:34:14.868233] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:28.464 [2024-07-11 18:34:14.868251] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:28.464 [2024-07-11 18:34:14.868262] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:28.464 [2024-07-11 18:34:14.868279] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:28.465 [2024-07-11 18:34:14.868290] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:28.465 [2024-07-11 18:34:14.868300] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:28.465 [2024-07-11 18:34:14.868309] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:28.465 [2024-07-11 18:34:14.868319] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:28.465 [2024-07-11 18:34:14.868329] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:28.465 [2024-07-11 18:34:14.868338] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:28.465 [2024-07-11 18:34:14.868347] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:28.465 [2024-07-11 18:34:14.868356] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:28.465 [2024-07-11 18:34:14.868365] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:28.465 [2024-07-11 18:34:14.868374] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:28.465 [2024-07-11 18:34:14.868417] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:28.465 [2024-07-11 18:34:14.868427] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:28.465 [2024-07-11 18:34:14.868436] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:28.465 [2024-07-11 18:34:14.868445] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:28.465 [2024-07-11 18:34:14.868454] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:28.465 [2024-07-11 18:34:14.868464] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:28.465 [2024-07-11 18:34:14.868473] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:28.465 [2024-07-11 18:34:14.868481] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:28.465 [2024-07-11 18:34:14.868490] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:28.465 [2024-07-11 18:34:14.868499] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:28.465 [2024-07-11 18:34:14.868508] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:28.465 [2024-07-11 18:34:14.868517] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:28.465 [2024-07-11 18:34:14.868527] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:28.465 [2024-07-11 18:34:14.868535] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:28.465 [2024-07-11 18:34:14.868544] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:28.465 [2024-07-11 18:34:14.868554] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:28.465 [2024-07-11 18:34:14.868566] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:28.465 [2024-07-11 18:34:14.868576] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:28.465 [2024-07-11 18:34:14.868585] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:28.465 [2024-07-11 18:34:14.868594] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:28.465 [2024-07-11 18:34:14.868603] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:28.465 [2024-07-11 18:34:14.868612] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:28.465 [2024-07-11 18:34:14.868621] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:28.465 [2024-07-11 18:34:14.868630] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:28.465 [2024-07-11 18:34:14.868639] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:28.465 [2024-07-11 18:34:14.868648] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:28.465 [2024-07-11 18:34:14.868657] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:28.465 [2024-07-11 18:34:14.868666] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:28.465 [2024-07-11 18:34:14.868674] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:28.465 [2024-07-11 18:34:14.868682] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:28.465 [2024-07-11 18:34:14.868692] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:28.465 [2024-07-11 18:34:14.868702] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:28.465 [2024-07-11 18:34:14.868713] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:28.465 [2024-07-11 18:34:14.868732] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:28.465 [2024-07-11 18:34:14.868741] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:28.465 [2024-07-11 18:34:14.868751] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:28.465 
[2024-07-11 18:34:14.868760] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:28.465 [2024-07-11 18:34:14.868769] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:28.465 [2024-07-11 18:34:14.868778] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:28.465 [2024-07-11 18:34:14.868789] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:28.465 [2024-07-11 18:34:14.868801] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:28.465 [2024-07-11 18:34:14.868812] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:28.465 [2024-07-11 18:34:14.868822] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:28.465 [2024-07-11 18:34:14.868832] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:28.465 [2024-07-11 18:34:14.868842] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:28.465 [2024-07-11 18:34:14.868852] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:28.465 [2024-07-11 18:34:14.868861] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:28.465 [2024-07-11 18:34:14.868871] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:28.465 [2024-07-11 18:34:14.868884] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:28.465 [2024-07-11 18:34:14.868894] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:28.465 [2024-07-11 18:34:14.868904] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:28.465 [2024-07-11 18:34:14.868914] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:28.465 [2024-07-11 18:34:14.868923] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:28.465 [2024-07-11 18:34:14.868933] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:28.465 [2024-07-11 18:34:14.868943] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:28.465 [2024-07-11 18:34:14.868953] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:28.465 [2024-07-11 18:34:14.868964] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:28.465 [2024-07-11 18:34:14.868984] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:27:28.465 [2024-07-11 18:34:14.868994] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:28.465 [2024-07-11 18:34:14.869003] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:28.465 [2024-07-11 18:34:14.869013] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:28.465 [2024-07-11 18:34:14.869024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.465 [2024-07-11 18:34:14.869041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:28.465 [2024-07-11 18:34:14.869054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.864 ms 00:27:28.465 [2024-07-11 18:34:14.869066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.725 [2024-07-11 18:34:14.889705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.725 [2024-07-11 18:34:14.889756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:28.725 [2024-07-11 18:34:14.889774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.587 ms 00:27:28.725 [2024-07-11 18:34:14.889783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.725 [2024-07-11 18:34:14.889877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.725 [2024-07-11 18:34:14.889897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:28.725 [2024-07-11 18:34:14.889909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:27:28.725 [2024-07-11 18:34:14.889921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.725 [2024-07-11 18:34:14.896988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.725 [2024-07-11 18:34:14.897027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:28.725 [2024-07-11 18:34:14.897041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.982 ms 00:27:28.725 [2024-07-11 18:34:14.897061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.725 [2024-07-11 18:34:14.897151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.725 [2024-07-11 18:34:14.897167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:28.725 [2024-07-11 18:34:14.897184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:28.726 [2024-07-11 18:34:14.897194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.726 [2024-07-11 18:34:14.897541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.726 [2024-07-11 18:34:14.897566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:28.726 [2024-07-11 18:34:14.897578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.306 ms 00:27:28.726 [2024-07-11 18:34:14.897588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.726 [2024-07-11 18:34:14.897739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.726 [2024-07-11 18:34:14.897770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:28.726 [2024-07-11 18:34:14.897781] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.126 ms 00:27:28.726 [2024-07-11 18:34:14.897794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.726 [2024-07-11 18:34:14.902188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.726 [2024-07-11 18:34:14.902224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:28.726 [2024-07-11 18:34:14.902237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.367 ms 00:27:28.726 [2024-07-11 18:34:14.902247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.726 [2024-07-11 18:34:14.904543] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:27:28.726 [2024-07-11 18:34:14.904582] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:28.726 [2024-07-11 18:34:14.904602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.726 [2024-07-11 18:34:14.904612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:28.726 [2024-07-11 18:34:14.904622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.233 ms 00:27:28.726 [2024-07-11 18:34:14.904634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.726 [2024-07-11 18:34:14.917523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.726 [2024-07-11 18:34:14.917560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:28.726 [2024-07-11 18:34:14.917574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.849 ms 00:27:28.726 [2024-07-11 18:34:14.917584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.726 [2024-07-11 18:34:14.919403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.726 [2024-07-11 18:34:14.919464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:28.726 [2024-07-11 18:34:14.919495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.773 ms 00:27:28.726 [2024-07-11 18:34:14.919505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.726 [2024-07-11 18:34:14.921103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.726 [2024-07-11 18:34:14.921165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:28.726 [2024-07-11 18:34:14.921196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.560 ms 00:27:28.726 [2024-07-11 18:34:14.921206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.726 [2024-07-11 18:34:14.921563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.726 [2024-07-11 18:34:14.921589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:28.726 [2024-07-11 18:34:14.921602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.275 ms 00:27:28.726 [2024-07-11 18:34:14.921612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.726 [2024-07-11 18:34:14.936472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.726 [2024-07-11 18:34:14.936539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:27:28.726 [2024-07-11 18:34:14.936557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
14.823 ms 00:27:28.726 [2024-07-11 18:34:14.936567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.726 [2024-07-11 18:34:14.943334] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:28.726 [2024-07-11 18:34:14.945465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.726 [2024-07-11 18:34:14.945496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:28.726 [2024-07-11 18:34:14.945518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.845 ms 00:27:28.726 [2024-07-11 18:34:14.945535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.726 [2024-07-11 18:34:14.945596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.726 [2024-07-11 18:34:14.945622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:28.726 [2024-07-11 18:34:14.945633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:27:28.726 [2024-07-11 18:34:14.945643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.726 [2024-07-11 18:34:14.945732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.726 [2024-07-11 18:34:14.945748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:28.726 [2024-07-11 18:34:14.945762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:27:28.726 [2024-07-11 18:34:14.945771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.726 [2024-07-11 18:34:14.945798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.726 [2024-07-11 18:34:14.945810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:28.726 [2024-07-11 18:34:14.945820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:28.726 [2024-07-11 18:34:14.945829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.726 [2024-07-11 18:34:14.945873] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:28.726 [2024-07-11 18:34:14.945887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.726 [2024-07-11 18:34:14.945905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:28.726 [2024-07-11 18:34:14.945921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:27:28.726 [2024-07-11 18:34:14.945935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.726 [2024-07-11 18:34:14.949261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.726 [2024-07-11 18:34:14.949297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:28.726 [2024-07-11 18:34:14.949327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.294 ms 00:27:28.726 [2024-07-11 18:34:14.949348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.726 [2024-07-11 18:34:14.949415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:28.726 [2024-07-11 18:34:14.949432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:28.726 [2024-07-11 18:34:14.949453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:27:28.726 [2024-07-11 18:34:14.949467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:28.726 
[2024-07-11 18:34:14.950729] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 92.943 ms, result 0 00:28:12.615  Copying: 1024/1024 [MB] (average 23 MBps)[2024-07-11 18:34:58.692018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:12.615 [2024-07-11 18:34:58.692071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:12.615 [2024-07-11 18:34:58.692129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:12.615 [2024-07-11 18:34:58.692140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.615 [2024-07-11 18:34:58.692184] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:12.615 [2024-07-11 18:34:58.692654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:12.615 [2024-07-11 18:34:58.692683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:12.615 [2024-07-11 18:34:58.692697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.450 ms 00:28:12.615 [2024-07-11 18:34:58.692707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.615 [2024-07-11 18:34:58.694286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:12.615 [2024-07-11 18:34:58.694353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:12.615 [2024-07-11 18:34:58.694383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.553 ms 00:28:12.615 [2024-07-11 18:34:58.694400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.615 [2024-07-11 18:34:58.694435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:12.615 [2024-07-11 18:34:58.694456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:28:12.615 [2024-07-11 18:34:58.694466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:12.615 [2024-07-11 18:34:58.694475] mngt/ftl_mngt.c: 431:trace_step:
*NOTICE*: [FTL][ftl0] status: 0 00:28:12.615 [2024-07-11 18:34:58.694522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:12.615 [2024-07-11 18:34:58.694542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:28:12.615 [2024-07-11 18:34:58.694552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:28:12.615 [2024-07-11 18:34:58.694561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.615 [2024-07-11 18:34:58.694589] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:12.615 [2024-07-11 18:34:58.694608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:28:12.615 [2024-07-11 18:34:58.694620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:28:12.616 [2024-07-11 18:34:58.694630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:12.616 [2024-07-11 18:34:58.694639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:12.616 [2024-07-11 18:34:58.694649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:12.616 [2024-07-11 18:34:58.694658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:12.616 [2024-07-11 18:34:58.694668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:12.616 [2024-07-11 18:34:58.694677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:12.616 [2024-07-11 18:34:58.694687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:12.616 [2024-07-11 18:34:58.694696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:12.616 [2024-07-11 18:34:58.694706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:12.616 [2024-07-11 18:34:58.694715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:12.616 [2024-07-11 18:34:58.694725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:12.616 [2024-07-11 18:34:58.694735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:12.616 [2024-07-11 18:34:58.694744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:12.616 [2024-07-11 18:34:58.694753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:12.616 [2024-07-11 18:34:58.694763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:12.616 [2024-07-11 18:34:58.694772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:12.616 [2024-07-11 18:34:58.694782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:12.616 [2024-07-11 18:34:58.694791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:12.616 [2024-07-11 18:34:58.694921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:12.616 [2024-07-11 18:34:58.694932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:12.616 [2024-07-11 18:34:58.694941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:12.616 [2024-07-11 18:34:58.694951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:12.616 [2024-07-11 18:34:58.694961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:12.616 [2024-07-11 18:34:58.694970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:12.616 [2024-07-11 18:34:58.694980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:12.616 [2024-07-11 18:34:58.694989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:12.616 [2024-07-11 18:34:58.694998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:12.616 [2024-07-11 18:34:58.695008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:12.616 [2024-07-11 18:34:58.695017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:12.616 [2024-07-11 18:34:58.695026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:12.616 [2024-07-11 18:34:58.695035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:12.616 [2024-07-11 18:34:58.695045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:12.616 [2024-07-11 18:34:58.695055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:12.616 [2024-07-11 18:34:58.695065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:12.616 [2024-07-11 18:34:58.695075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:12.616 [2024-07-11 18:34:58.695122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:12.616 [2024-07-11 18:34:58.695133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:12.616 [2024-07-11 18:34:58.695143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:12.616 [2024-07-11 18:34:58.695153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:12.616 [2024-07-11 18:34:58.695163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:12.616 [2024-07-11 18:34:58.695173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:12.616 [2024-07-11 18:34:58.695183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:12.616 [2024-07-11 18:34:58.695193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:12.616 [2024-07-11 18:34:58.695202] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:12.616 [2024-07-11 18:34:58.695212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:12.616 [2024-07-11 18:34:58.695221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:12.616 [2024-07-11 18:34:58.695232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:12.616 [2024-07-11 18:34:58.695241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:12.616 [2024-07-11 18:34:58.695251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:12.616 [2024-07-11 18:34:58.695261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:12.616 [2024-07-11 18:34:58.695270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:12.616 [2024-07-11 18:34:58.695280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:12.616 [2024-07-11 18:34:58.695290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:12.616 [2024-07-11 18:34:58.695300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:12.616 [2024-07-11 18:34:58.695310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:12.616 [2024-07-11 18:34:58.695320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:12.616 [2024-07-11 18:34:58.695329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:12.616 [2024-07-11 18:34:58.695338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:12.616 [2024-07-11 18:34:58.695348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:12.616 [2024-07-11 18:34:58.695358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:12.616 [2024-07-11 18:34:58.695368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:12.616 [2024-07-11 18:34:58.695377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:12.616 [2024-07-11 18:34:58.695387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:12.616 [2024-07-11 18:34:58.695411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:12.616 [2024-07-11 18:34:58.695422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:12.616 [2024-07-11 18:34:58.695433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:12.616 [2024-07-11 18:34:58.695443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:12.616 [2024-07-11 18:34:58.695453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:12.616 [2024-07-11 18:34:58.695463] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:12.616 [2024-07-11 18:34:58.695516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:12.616 [2024-07-11 18:34:58.695528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:12.616 [2024-07-11 18:34:58.695554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:12.616 [2024-07-11 18:34:58.695566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:12.616 [2024-07-11 18:34:58.695578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:12.616 [2024-07-11 18:34:58.695590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:12.616 [2024-07-11 18:34:58.695601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:12.616 [2024-07-11 18:34:58.695612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:12.616 [2024-07-11 18:34:58.695624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:12.616 [2024-07-11 18:34:58.695635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:12.616 [2024-07-11 18:34:58.695647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:12.616 [2024-07-11 18:34:58.695678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:12.616 [2024-07-11 18:34:58.695690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:12.616 [2024-07-11 18:34:58.695702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:12.616 [2024-07-11 18:34:58.695714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:12.616 [2024-07-11 18:34:58.695725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:12.616 [2024-07-11 18:34:58.695737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:12.616 [2024-07-11 18:34:58.695752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:12.616 [2024-07-11 18:34:58.695763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:12.616 [2024-07-11 18:34:58.695774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:12.616 [2024-07-11 18:34:58.695786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:12.616 [2024-07-11 18:34:58.695812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:12.616 [2024-07-11 18:34:58.695838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:12.616 [2024-07-11 18:34:58.695849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:12.616 [2024-07-11 
18:34:58.695860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:12.616 [2024-07-11 18:34:58.695870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:12.616 [2024-07-11 18:34:58.695881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:12.617 [2024-07-11 18:34:58.695892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:12.617 [2024-07-11 18:34:58.695903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:12.617 [2024-07-11 18:34:58.695936] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:12.617 [2024-07-11 18:34:58.695946] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8dbb777d-c192-43ab-944a-cae90757a7a8 00:28:12.617 [2024-07-11 18:34:58.695956] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:28:12.617 [2024-07-11 18:34:58.695966] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:28:12.617 [2024-07-11 18:34:58.695975] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:28:12.617 [2024-07-11 18:34:58.695985] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:28:12.617 [2024-07-11 18:34:58.695994] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:12.617 [2024-07-11 18:34:58.696004] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:12.617 [2024-07-11 18:34:58.696014] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:12.617 [2024-07-11 18:34:58.696023] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:12.617 [2024-07-11 18:34:58.696032] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:12.617 [2024-07-11 18:34:58.696042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:12.617 [2024-07-11 18:34:58.696057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:12.617 [2024-07-11 18:34:58.696068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.461 ms 00:28:12.617 [2024-07-11 18:34:58.696086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.617 [2024-07-11 18:34:58.697300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:12.617 [2024-07-11 18:34:58.697325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:12.617 [2024-07-11 18:34:58.697338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.194 ms 00:28:12.617 [2024-07-11 18:34:58.697348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.617 [2024-07-11 18:34:58.697441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:12.617 [2024-07-11 18:34:58.697460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:12.617 [2024-07-11 18:34:58.697472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:28:12.617 [2024-07-11 18:34:58.697481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.617 [2024-07-11 18:34:58.701722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:12.617 [2024-07-11 18:34:58.701883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:12.617 
[2024-07-11 18:34:58.702036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:12.617 [2024-07-11 18:34:58.702097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.617 [2024-07-11 18:34:58.702286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:12.617 [2024-07-11 18:34:58.702409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:12.617 [2024-07-11 18:34:58.702518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:12.617 [2024-07-11 18:34:58.702566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.617 [2024-07-11 18:34:58.702752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:12.617 [2024-07-11 18:34:58.702813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:12.617 [2024-07-11 18:34:58.702854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:12.617 [2024-07-11 18:34:58.702888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.617 [2024-07-11 18:34:58.702992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:12.617 [2024-07-11 18:34:58.703047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:12.617 [2024-07-11 18:34:58.703109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:12.617 [2024-07-11 18:34:58.703150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.617 [2024-07-11 18:34:58.710675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:12.617 [2024-07-11 18:34:58.710902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:12.617 [2024-07-11 18:34:58.711010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:12.617 [2024-07-11 18:34:58.711056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.617 [2024-07-11 18:34:58.717684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:12.617 [2024-07-11 18:34:58.717883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:12.617 [2024-07-11 18:34:58.717991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:12.617 [2024-07-11 18:34:58.718123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.617 [2024-07-11 18:34:58.718194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:12.617 [2024-07-11 18:34:58.718314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:12.617 [2024-07-11 18:34:58.718366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:12.617 [2024-07-11 18:34:58.718461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.617 [2024-07-11 18:34:58.718619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:12.617 [2024-07-11 18:34:58.718737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:12.617 [2024-07-11 18:34:58.718850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:12.617 [2024-07-11 18:34:58.718959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.617 [2024-07-11 18:34:58.719090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:12.617 [2024-07-11 18:34:58.719148] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:12.617 [2024-07-11 18:34:58.719248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:12.617 [2024-07-11 18:34:58.719296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.617 [2024-07-11 18:34:58.719366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:12.617 [2024-07-11 18:34:58.719450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:12.617 [2024-07-11 18:34:58.719522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:12.617 [2024-07-11 18:34:58.719573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.617 [2024-07-11 18:34:58.719641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:12.617 [2024-07-11 18:34:58.719748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:12.617 [2024-07-11 18:34:58.719792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:12.617 [2024-07-11 18:34:58.719827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.617 [2024-07-11 18:34:58.719912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:12.617 [2024-07-11 18:34:58.720006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:12.617 [2024-07-11 18:34:58.720054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:12.617 [2024-07-11 18:34:58.720103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:12.617 [2024-07-11 18:34:58.720261] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 28.209 ms, result 0 00:28:12.617 00:28:12.617 00:28:12.617 18:34:58 ftl.ftl_restore_fast -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:28:12.877 [2024-07-11 18:34:59.032984] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:28:12.877 [2024-07-11 18:34:59.033199] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97245 ] 00:28:12.877 [2024-07-11 18:34:59.178794] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:12.877 [2024-07-11 18:34:59.218297] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:13.138 [2024-07-11 18:34:59.300760] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:13.138 [2024-07-11 18:34:59.300840] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:13.138 [2024-07-11 18:34:59.454461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.138 [2024-07-11 18:34:59.454504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:13.138 [2024-07-11 18:34:59.454521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:13.138 [2024-07-11 18:34:59.454540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.138 [2024-07-11 18:34:59.454605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.138 [2024-07-11 18:34:59.454626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:13.138 [2024-07-11 18:34:59.454640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:28:13.138 [2024-07-11 18:34:59.454649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.138 [2024-07-11 18:34:59.454676] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:13.138 [2024-07-11 18:34:59.454919] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:13.138 [2024-07-11 18:34:59.454943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.138 [2024-07-11 18:34:59.454966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:13.138 [2024-07-11 18:34:59.454977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.274 ms 00:28:13.138 [2024-07-11 18:34:59.454995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.138 [2024-07-11 18:34:59.455558] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:28:13.138 [2024-07-11 18:34:59.455598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.138 [2024-07-11 18:34:59.455612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:13.138 [2024-07-11 18:34:59.455631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:28:13.138 [2024-07-11 18:34:59.455641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.138 [2024-07-11 18:34:59.455695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.138 [2024-07-11 18:34:59.455719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:28:13.138 [2024-07-11 18:34:59.455731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:28:13.138 [2024-07-11 18:34:59.455740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.138 [2024-07-11 18:34:59.456163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:28:13.138 [2024-07-11 18:34:59.456182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:13.138 [2024-07-11 18:34:59.456198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.374 ms 00:28:13.138 [2024-07-11 18:34:59.456208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.138 [2024-07-11 18:34:59.456287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.138 [2024-07-11 18:34:59.456304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:13.138 [2024-07-11 18:34:59.456320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:28:13.138 [2024-07-11 18:34:59.456329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.138 [2024-07-11 18:34:59.456359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.138 [2024-07-11 18:34:59.456376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:13.138 [2024-07-11 18:34:59.456385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:28:13.138 [2024-07-11 18:34:59.456394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.138 [2024-07-11 18:34:59.456420] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:13.138 [2024-07-11 18:34:59.457672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.138 [2024-07-11 18:34:59.457690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:13.138 [2024-07-11 18:34:59.457710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.256 ms 00:28:13.138 [2024-07-11 18:34:59.457722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.138 [2024-07-11 18:34:59.457755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.138 [2024-07-11 18:34:59.457768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:13.138 [2024-07-11 18:34:59.457778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:28:13.138 [2024-07-11 18:34:59.457790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.138 [2024-07-11 18:34:59.457812] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:13.138 [2024-07-11 18:34:59.457834] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:13.138 [2024-07-11 18:34:59.457876] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:13.138 [2024-07-11 18:34:59.457900] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:28:13.138 [2024-07-11 18:34:59.457980] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:13.138 [2024-07-11 18:34:59.458001] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:13.138 [2024-07-11 18:34:59.458016] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:28:13.138 [2024-07-11 18:34:59.458028] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:13.138 [2024-07-11 18:34:59.458038] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:13.138 [2024-07-11 18:34:59.458047] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:13.138 [2024-07-11 18:34:59.458056] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:13.138 [2024-07-11 18:34:59.458066] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:13.138 [2024-07-11 18:34:59.458075] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:13.138 [2024-07-11 18:34:59.458103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.138 [2024-07-11 18:34:59.458113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:13.138 [2024-07-11 18:34:59.458132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.293 ms 00:28:13.138 [2024-07-11 18:34:59.458141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.138 [2024-07-11 18:34:59.458213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.138 [2024-07-11 18:34:59.458225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:13.138 [2024-07-11 18:34:59.458235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:28:13.138 [2024-07-11 18:34:59.458242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.138 [2024-07-11 18:34:59.458341] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:13.138 [2024-07-11 18:34:59.458358] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:13.138 [2024-07-11 18:34:59.458368] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:13.138 [2024-07-11 18:34:59.458389] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:13.138 [2024-07-11 18:34:59.458398] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:13.138 [2024-07-11 18:34:59.458406] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:13.138 [2024-07-11 18:34:59.458417] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:13.138 [2024-07-11 18:34:59.458426] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:13.138 [2024-07-11 18:34:59.458434] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:13.138 [2024-07-11 18:34:59.458442] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:13.138 [2024-07-11 18:34:59.458450] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:13.138 [2024-07-11 18:34:59.458458] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:13.138 [2024-07-11 18:34:59.458465] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:13.139 [2024-07-11 18:34:59.458473] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:13.139 [2024-07-11 18:34:59.458481] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:13.139 [2024-07-11 18:34:59.458489] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:13.139 [2024-07-11 18:34:59.458497] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:13.139 [2024-07-11 18:34:59.458506] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:13.139 [2024-07-11 18:34:59.458513] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:13.139 [2024-07-11 18:34:59.458521] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:13.139 [2024-07-11 18:34:59.458529] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:13.139 [2024-07-11 18:34:59.458536] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:13.139 [2024-07-11 18:34:59.458546] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:13.139 [2024-07-11 18:34:59.458556] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:13.139 [2024-07-11 18:34:59.458563] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:13.139 [2024-07-11 18:34:59.458571] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:13.139 [2024-07-11 18:34:59.458579] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:13.139 [2024-07-11 18:34:59.458586] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:13.139 [2024-07-11 18:34:59.458594] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:13.139 [2024-07-11 18:34:59.458602] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:13.139 [2024-07-11 18:34:59.458610] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:13.139 [2024-07-11 18:34:59.458617] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:13.139 [2024-07-11 18:34:59.458625] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:13.139 [2024-07-11 18:34:59.458633] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:13.139 [2024-07-11 18:34:59.458640] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:13.139 [2024-07-11 18:34:59.458648] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:13.139 [2024-07-11 18:34:59.458656] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:13.139 [2024-07-11 18:34:59.458663] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:13.139 [2024-07-11 18:34:59.458675] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:13.139 [2024-07-11 18:34:59.458683] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:13.139 [2024-07-11 18:34:59.458692] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:13.139 [2024-07-11 18:34:59.458700] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:13.139 [2024-07-11 18:34:59.458707] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:13.139 [2024-07-11 18:34:59.458715] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:13.139 [2024-07-11 18:34:59.458726] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:13.139 [2024-07-11 18:34:59.458734] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:13.139 [2024-07-11 18:34:59.458743] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:13.139 [2024-07-11 18:34:59.458752] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:13.139 [2024-07-11 18:34:59.458760] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:13.139 [2024-07-11 18:34:59.458768] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:13.139 
[2024-07-11 18:34:59.458777] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:13.139 [2024-07-11 18:34:59.458785] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:13.139 [2024-07-11 18:34:59.458793] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:13.139 [2024-07-11 18:34:59.458802] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:13.139 [2024-07-11 18:34:59.458823] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:13.139 [2024-07-11 18:34:59.458834] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:13.139 [2024-07-11 18:34:59.458843] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:13.139 [2024-07-11 18:34:59.458852] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:13.139 [2024-07-11 18:34:59.458861] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:13.139 [2024-07-11 18:34:59.458870] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:13.139 [2024-07-11 18:34:59.458878] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:13.139 [2024-07-11 18:34:59.458887] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:13.139 [2024-07-11 18:34:59.458895] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:13.139 [2024-07-11 18:34:59.458904] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:13.139 [2024-07-11 18:34:59.458913] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:13.139 [2024-07-11 18:34:59.458921] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:13.139 [2024-07-11 18:34:59.458930] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:13.139 [2024-07-11 18:34:59.458939] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:13.139 [2024-07-11 18:34:59.458947] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:13.139 [2024-07-11 18:34:59.458956] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:13.139 [2024-07-11 18:34:59.458967] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:13.139 [2024-07-11 18:34:59.458984] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:28:13.139 [2024-07-11 18:34:59.458994] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:13.139 [2024-07-11 18:34:59.459010] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:13.139 [2024-07-11 18:34:59.459019] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:13.139 [2024-07-11 18:34:59.459029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.139 [2024-07-11 18:34:59.459044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:13.139 [2024-07-11 18:34:59.459054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.739 ms 00:28:13.139 [2024-07-11 18:34:59.459062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.139 [2024-07-11 18:34:59.475962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.139 [2024-07-11 18:34:59.476180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:13.139 [2024-07-11 18:34:59.476310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.292 ms 00:28:13.139 [2024-07-11 18:34:59.476360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.139 [2024-07-11 18:34:59.476490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.139 [2024-07-11 18:34:59.476544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:13.139 [2024-07-11 18:34:59.476654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:28:13.139 [2024-07-11 18:34:59.476701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.139 [2024-07-11 18:34:59.483901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.139 [2024-07-11 18:34:59.484095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:13.139 [2024-07-11 18:34:59.484213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.106 ms 00:28:13.139 [2024-07-11 18:34:59.484274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.139 [2024-07-11 18:34:59.484402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.139 [2024-07-11 18:34:59.484458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:13.139 [2024-07-11 18:34:59.484507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:13.139 [2024-07-11 18:34:59.484548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.139 [2024-07-11 18:34:59.484773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.139 [2024-07-11 18:34:59.484902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:13.139 [2024-07-11 18:34:59.484999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:28:13.139 [2024-07-11 18:34:59.485114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.139 [2024-07-11 18:34:59.485291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.139 [2024-07-11 18:34:59.485348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:13.139 [2024-07-11 18:34:59.485460] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.108 ms 00:28:13.139 [2024-07-11 18:34:59.485571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.139 [2024-07-11 18:34:59.489983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.139 [2024-07-11 18:34:59.490182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:13.139 [2024-07-11 18:34:59.490305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.343 ms 00:28:13.140 [2024-07-11 18:34:59.490417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.140 [2024-07-11 18:34:59.490617] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:28:13.140 [2024-07-11 18:34:59.490759] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:13.140 [2024-07-11 18:34:59.490958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.140 [2024-07-11 18:34:59.491004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:13.140 [2024-07-11 18:34:59.491159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.372 ms 00:28:13.140 [2024-07-11 18:34:59.491220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.140 [2024-07-11 18:34:59.502999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.140 [2024-07-11 18:34:59.503183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:13.140 [2024-07-11 18:34:59.503293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.668 ms 00:28:13.140 [2024-07-11 18:34:59.503348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.140 [2024-07-11 18:34:59.503509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.140 [2024-07-11 18:34:59.503581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:13.140 [2024-07-11 18:34:59.503633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:28:13.140 [2024-07-11 18:34:59.503669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.140 [2024-07-11 18:34:59.503757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.140 [2024-07-11 18:34:59.503873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:13.140 [2024-07-11 18:34:59.503915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:28:13.140 [2024-07-11 18:34:59.503948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.140 [2024-07-11 18:34:59.504321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.140 [2024-07-11 18:34:59.504486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:13.140 [2024-07-11 18:34:59.504595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.281 ms 00:28:13.140 [2024-07-11 18:34:59.504747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.140 [2024-07-11 18:34:59.504813] mngt/ftl_mngt_p2l.c: 132:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:28:13.140 [2024-07-11 18:34:59.505056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.140 [2024-07-11 18:34:59.505116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:28:13.140 [2024-07-11 18:34:59.505214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.244 ms 00:28:13.140 [2024-07-11 18:34:59.505230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.140 [2024-07-11 18:34:59.513069] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:13.140 [2024-07-11 18:34:59.513385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.140 [2024-07-11 18:34:59.513512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:13.140 [2024-07-11 18:34:59.513626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.124 ms 00:28:13.140 [2024-07-11 18:34:59.513731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.140 [2024-07-11 18:34:59.515877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.140 [2024-07-11 18:34:59.516023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:13.140 [2024-07-11 18:34:59.516142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.061 ms 00:28:13.140 [2024-07-11 18:34:59.516260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.140 [2024-07-11 18:34:59.516398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.140 [2024-07-11 18:34:59.516455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:13.140 [2024-07-11 18:34:59.516559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:28:13.140 [2024-07-11 18:34:59.516604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.140 [2024-07-11 18:34:59.516730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.140 [2024-07-11 18:34:59.516781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:13.140 [2024-07-11 18:34:59.516832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:28:13.140 [2024-07-11 18:34:59.516902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.140 [2024-07-11 18:34:59.516968] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:13.140 [2024-07-11 18:34:59.517071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.140 [2024-07-11 18:34:59.517187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:13.140 [2024-07-11 18:34:59.517284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:28:13.140 [2024-07-11 18:34:59.517426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.140 [2024-07-11 18:34:59.521156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.140 [2024-07-11 18:34:59.521343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:13.140 [2024-07-11 18:34:59.521454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.656 ms 00:28:13.140 [2024-07-11 18:34:59.521565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.140 [2024-07-11 18:34:59.521677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.140 [2024-07-11 18:34:59.521769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:13.140 [2024-07-11 18:34:59.521866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.032 ms 00:28:13.140 [2024-07-11 18:34:59.521911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.140 [2024-07-11 18:34:59.523253] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 68.294 ms, result 0 00:28:59.932  Copying: 1024/1024 [MB] (average 22 MBps)[2024-07-11 18:35:46.259220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:59.932 [2024-07-11 18:35:46.259305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:59.932 [2024-07-11 18:35:46.259328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:59.932 [2024-07-11 18:35:46.259349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:59.932 [2024-07-11 18:35:46.259380] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:59.932 [2024-07-11 18:35:46.259890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:59.932 [2024-07-11 18:35:46.259911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:59.932 [2024-07-11 18:35:46.259923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.489 ms 00:28:59.932 [2024-07-11 18:35:46.259935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:59.932 [2024-07-11 18:35:46.260198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:59.932 [2024-07-11 18:35:46.260219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:59.932 [2024-07-11 18:35:46.260231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.239 ms 00:28:59.932 [2024-07-11 18:35:46.260242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:59.932 [2024-07-11 18:35:46.260297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:59.932 [2024-07-11 18:35:46.260316] mngt/ftl_mngt.c: 428:trace_step:
*NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:28:59.932 [2024-07-11 18:35:46.260328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:59.932 [2024-07-11 18:35:46.260348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:59.932 [2024-07-11 18:35:46.260415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:59.932 [2024-07-11 18:35:46.260430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:28:59.932 [2024-07-11 18:35:46.260442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:28:59.932 [2024-07-11 18:35:46.260463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:59.932 [2024-07-11 18:35:46.260482] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:59.932 [2024-07-11 18:35:46.260506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:28:59.932 [2024-07-11 18:35:46.260519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:28:59.932 [2024-07-11 18:35:46.260531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:59.932 [2024-07-11 18:35:46.260542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:59.932 [2024-07-11 18:35:46.260554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:59.932 [2024-07-11 18:35:46.260565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:59.932 [2024-07-11 18:35:46.260576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:59.932 [2024-07-11 18:35:46.260587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:59.932 [2024-07-11 18:35:46.260599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:59.932 [2024-07-11 18:35:46.260610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:59.932 [2024-07-11 18:35:46.260622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:59.932 [2024-07-11 18:35:46.260633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:59.932 [2024-07-11 18:35:46.260646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:59.932 [2024-07-11 18:35:46.260658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:59.932 [2024-07-11 18:35:46.260669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:59.932 [2024-07-11 18:35:46.260680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:59.932 [2024-07-11 18:35:46.260691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:59.932 [2024-07-11 18:35:46.260702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:59.932 [2024-07-11 18:35:46.260713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 
state: free 00:28:59.932 [2024-07-11 18:35:46.260724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:59.932 [2024-07-11 18:35:46.260735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:59.932 [2024-07-11 18:35:46.260746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:59.932 [2024-07-11 18:35:46.260758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:59.932 [2024-07-11 18:35:46.260769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:59.932 [2024-07-11 18:35:46.260780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:59.933 [2024-07-11 18:35:46.260791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:59.933 [2024-07-11 18:35:46.260803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:59.933 [2024-07-11 18:35:46.260814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:59.933 [2024-07-11 18:35:46.260825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:59.933 [2024-07-11 18:35:46.260836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:59.933 [2024-07-11 18:35:46.260847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:59.933 [2024-07-11 18:35:46.260869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:59.933 [2024-07-11 18:35:46.260880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:59.933 [2024-07-11 18:35:46.260891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:59.933 [2024-07-11 18:35:46.260902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:59.933 [2024-07-11 18:35:46.260913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:59.933 [2024-07-11 18:35:46.260924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:59.933 [2024-07-11 18:35:46.260935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:59.933 [2024-07-11 18:35:46.260946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:59.933 [2024-07-11 18:35:46.260957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:59.933 [2024-07-11 18:35:46.260968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:59.933 [2024-07-11 18:35:46.260979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:59.933 [2024-07-11 18:35:46.260990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:59.933 [2024-07-11 18:35:46.261002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 
0 / 261120 wr_cnt: 0 state: free 00:28:59.933 [2024-07-11 18:35:46.261014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:59.933 [2024-07-11 18:35:46.261025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:59.933 [2024-07-11 18:35:46.261036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:59.933 [2024-07-11 18:35:46.261048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:59.933 [2024-07-11 18:35:46.261059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:59.933 [2024-07-11 18:35:46.261070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:59.933 [2024-07-11 18:35:46.261098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:59.933 [2024-07-11 18:35:46.261112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:59.933 [2024-07-11 18:35:46.261123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:59.933 [2024-07-11 18:35:46.261135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:59.933 [2024-07-11 18:35:46.261146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:59.933 [2024-07-11 18:35:46.261157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:59.933 [2024-07-11 18:35:46.261168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:59.933 [2024-07-11 18:35:46.261179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:59.933 [2024-07-11 18:35:46.261190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:59.933 [2024-07-11 18:35:46.261201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:59.933 [2024-07-11 18:35:46.261213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:59.933 [2024-07-11 18:35:46.261224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:59.933 [2024-07-11 18:35:46.261235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:59.933 [2024-07-11 18:35:46.261246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:59.933 [2024-07-11 18:35:46.261257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:59.933 [2024-07-11 18:35:46.261269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:59.933 [2024-07-11 18:35:46.261280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:59.933 [2024-07-11 18:35:46.261291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:59.933 [2024-07-11 18:35:46.261303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:59.933 [2024-07-11 18:35:46.261314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:59.933 [2024-07-11 18:35:46.261325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:59.933 [2024-07-11 18:35:46.261337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:59.933 [2024-07-11 18:35:46.261348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:59.933 [2024-07-11 18:35:46.261359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:59.933 [2024-07-11 18:35:46.261370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:59.933 [2024-07-11 18:35:46.261381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:59.933 [2024-07-11 18:35:46.261393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:59.933 [2024-07-11 18:35:46.261404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:59.933 [2024-07-11 18:35:46.261415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:59.933 [2024-07-11 18:35:46.261426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:59.933 [2024-07-11 18:35:46.261437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:59.933 [2024-07-11 18:35:46.261448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:59.933 [2024-07-11 18:35:46.261459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:59.933 [2024-07-11 18:35:46.261471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:59.933 [2024-07-11 18:35:46.261482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:59.933 [2024-07-11 18:35:46.261493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:59.933 [2024-07-11 18:35:46.261504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:59.933 [2024-07-11 18:35:46.261515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:59.933 [2024-07-11 18:35:46.261527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:59.933 [2024-07-11 18:35:46.261538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:59.933 [2024-07-11 18:35:46.261549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:59.933 [2024-07-11 18:35:46.261561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:59.933 [2024-07-11 18:35:46.261572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:59.934 [2024-07-11 18:35:46.261583] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:59.934 [2024-07-11 18:35:46.261594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:59.934 [2024-07-11 18:35:46.261605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:59.934 [2024-07-11 18:35:46.261616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:59.934 [2024-07-11 18:35:46.261628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:59.934 [2024-07-11 18:35:46.261639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:59.934 [2024-07-11 18:35:46.261664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:59.934 [2024-07-11 18:35:46.261684] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:59.934 [2024-07-11 18:35:46.261695] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8dbb777d-c192-43ab-944a-cae90757a7a8 00:28:59.934 [2024-07-11 18:35:46.261706] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:28:59.934 [2024-07-11 18:35:46.261717] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:28:59.934 [2024-07-11 18:35:46.261727] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:28:59.934 [2024-07-11 18:35:46.261738] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:28:59.934 [2024-07-11 18:35:46.261758] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:59.934 [2024-07-11 18:35:46.261770] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:59.934 [2024-07-11 18:35:46.261789] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:59.934 [2024-07-11 18:35:46.261799] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:59.934 [2024-07-11 18:35:46.261809] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:59.934 [2024-07-11 18:35:46.261820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:59.934 [2024-07-11 18:35:46.261831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:59.934 [2024-07-11 18:35:46.261849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.339 ms 00:28:59.934 [2024-07-11 18:35:46.261859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:59.934 [2024-07-11 18:35:46.263326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:59.934 [2024-07-11 18:35:46.263363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:59.934 [2024-07-11 18:35:46.263378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.444 ms 00:28:59.934 [2024-07-11 18:35:46.263388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:59.934 [2024-07-11 18:35:46.263471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:59.934 [2024-07-11 18:35:46.263491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:59.934 [2024-07-11 18:35:46.263503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:28:59.934 [2024-07-11 18:35:46.263525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:28:59.934 [2024-07-11 18:35:46.268338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:59.934 [2024-07-11 18:35:46.268383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:59.934 [2024-07-11 18:35:46.268396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:59.934 [2024-07-11 18:35:46.268406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:59.934 [2024-07-11 18:35:46.268466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:59.934 [2024-07-11 18:35:46.268486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:59.934 [2024-07-11 18:35:46.268496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:59.934 [2024-07-11 18:35:46.268506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:59.934 [2024-07-11 18:35:46.268544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:59.934 [2024-07-11 18:35:46.268561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:59.934 [2024-07-11 18:35:46.268572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:59.934 [2024-07-11 18:35:46.268581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:59.934 [2024-07-11 18:35:46.268600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:59.934 [2024-07-11 18:35:46.268613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:59.934 [2024-07-11 18:35:46.268628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:59.934 [2024-07-11 18:35:46.268638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:59.934 [2024-07-11 18:35:46.278446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:59.934 [2024-07-11 18:35:46.278531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:59.934 [2024-07-11 18:35:46.278547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:59.934 [2024-07-11 18:35:46.278557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:59.934 [2024-07-11 18:35:46.285889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:59.934 [2024-07-11 18:35:46.285952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:59.934 [2024-07-11 18:35:46.285966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:59.934 [2024-07-11 18:35:46.285975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:59.934 [2024-07-11 18:35:46.286035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:59.934 [2024-07-11 18:35:46.286050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:59.934 [2024-07-11 18:35:46.286060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:59.934 [2024-07-11 18:35:46.286068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:59.934 [2024-07-11 18:35:46.286108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:59.934 [2024-07-11 18:35:46.286122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:59.934 [2024-07-11 18:35:46.286132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:59.934 [2024-07-11 
18:35:46.286145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:59.934 [2024-07-11 18:35:46.286220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:59.934 [2024-07-11 18:35:46.286252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:59.934 [2024-07-11 18:35:46.286278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:59.934 [2024-07-11 18:35:46.286297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:59.934 [2024-07-11 18:35:46.286339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:59.934 [2024-07-11 18:35:46.286355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:59.934 [2024-07-11 18:35:46.286365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:59.934 [2024-07-11 18:35:46.286375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:59.934 [2024-07-11 18:35:46.286421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:59.934 [2024-07-11 18:35:46.286443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:59.934 [2024-07-11 18:35:46.286454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:59.934 [2024-07-11 18:35:46.286463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:59.934 [2024-07-11 18:35:46.286517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:59.934 [2024-07-11 18:35:46.286535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:59.934 [2024-07-11 18:35:46.286545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:59.934 [2024-07-11 18:35:46.286559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:59.934 [2024-07-11 18:35:46.286685] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 27.437 ms, result 0 00:29:00.193 00:29:00.193 00:29:00.193 18:35:46 ftl.ftl_restore_fast -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:29:02.096 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:29:02.097 18:35:48 ftl.ftl_restore_fast -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:29:02.097 [2024-07-11 18:35:48.346133] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:29:02.097 [2024-07-11 18:35:48.346323] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97723 ] 00:29:02.097 [2024-07-11 18:35:48.497797] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:02.356 [2024-07-11 18:35:48.539996] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:02.356 [2024-07-11 18:35:48.633944] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:02.356 [2024-07-11 18:35:48.634053] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:02.616 [2024-07-11 18:35:48.794744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.616 [2024-07-11 18:35:48.794803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:02.616 [2024-07-11 18:35:48.794825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:29:02.616 [2024-07-11 18:35:48.794839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.616 [2024-07-11 18:35:48.794933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.616 [2024-07-11 18:35:48.794956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:02.616 [2024-07-11 18:35:48.794990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:29:02.616 [2024-07-11 18:35:48.795003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.616 [2024-07-11 18:35:48.795041] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:02.616 [2024-07-11 18:35:48.795398] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:02.616 [2024-07-11 18:35:48.795445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.616 [2024-07-11 18:35:48.795472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:02.616 [2024-07-11 18:35:48.795487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.412 ms 00:29:02.616 [2024-07-11 18:35:48.795504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.616 [2024-07-11 18:35:48.796069] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:29:02.616 [2024-07-11 18:35:48.796123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.616 [2024-07-11 18:35:48.796154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:29:02.616 [2024-07-11 18:35:48.796174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:29:02.616 [2024-07-11 18:35:48.796188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.616 [2024-07-11 18:35:48.796266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.616 [2024-07-11 18:35:48.796285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:29:02.616 [2024-07-11 18:35:48.796300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:29:02.616 [2024-07-11 18:35:48.796312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.616 [2024-07-11 18:35:48.796787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:29:02.616 [2024-07-11 18:35:48.796822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:02.616 [2024-07-11 18:35:48.796854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.414 ms 00:29:02.616 [2024-07-11 18:35:48.796869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.616 [2024-07-11 18:35:48.796982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.616 [2024-07-11 18:35:48.797012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:02.616 [2024-07-11 18:35:48.797027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:29:02.616 [2024-07-11 18:35:48.797041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.616 [2024-07-11 18:35:48.797103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.616 [2024-07-11 18:35:48.797131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:02.616 [2024-07-11 18:35:48.797146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:29:02.616 [2024-07-11 18:35:48.797173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.616 [2024-07-11 18:35:48.797215] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:02.616 [2024-07-11 18:35:48.798878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.616 [2024-07-11 18:35:48.798918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:02.616 [2024-07-11 18:35:48.798950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.670 ms 00:29:02.616 [2024-07-11 18:35:48.798968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.616 [2024-07-11 18:35:48.799028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.616 [2024-07-11 18:35:48.799047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:02.616 [2024-07-11 18:35:48.799061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:29:02.616 [2024-07-11 18:35:48.799120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.616 [2024-07-11 18:35:48.799176] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:29:02.616 [2024-07-11 18:35:48.799212] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:29:02.616 [2024-07-11 18:35:48.799271] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:29:02.616 [2024-07-11 18:35:48.799301] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:29:02.616 [2024-07-11 18:35:48.799437] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:02.616 [2024-07-11 18:35:48.799463] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:02.616 [2024-07-11 18:35:48.799489] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:29:02.616 [2024-07-11 18:35:48.799507] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:02.616 [2024-07-11 18:35:48.799537] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:02.616 [2024-07-11 18:35:48.799553] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:02.616 [2024-07-11 18:35:48.799566] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:02.616 [2024-07-11 18:35:48.799583] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:02.616 [2024-07-11 18:35:48.799608] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:02.616 [2024-07-11 18:35:48.799633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.616 [2024-07-11 18:35:48.799656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:02.616 [2024-07-11 18:35:48.799683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.459 ms 00:29:02.616 [2024-07-11 18:35:48.799697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.616 [2024-07-11 18:35:48.799821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.616 [2024-07-11 18:35:48.799853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:02.616 [2024-07-11 18:35:48.799867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:29:02.616 [2024-07-11 18:35:48.799880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.616 [2024-07-11 18:35:48.800020] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:02.616 [2024-07-11 18:35:48.800042] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:02.616 [2024-07-11 18:35:48.800056] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:02.616 [2024-07-11 18:35:48.800070] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:02.616 [2024-07-11 18:35:48.800103] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:02.616 [2024-07-11 18:35:48.800124] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:02.616 [2024-07-11 18:35:48.800139] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:29:02.616 [2024-07-11 18:35:48.800152] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:02.616 [2024-07-11 18:35:48.800165] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:29:02.616 [2024-07-11 18:35:48.800177] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:02.616 [2024-07-11 18:35:48.800189] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:02.616 [2024-07-11 18:35:48.800201] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:29:02.616 [2024-07-11 18:35:48.800215] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:02.616 [2024-07-11 18:35:48.800228] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:02.616 [2024-07-11 18:35:48.800241] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:29:02.616 [2024-07-11 18:35:48.800253] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:02.616 [2024-07-11 18:35:48.800265] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:02.616 [2024-07-11 18:35:48.800277] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:29:02.616 [2024-07-11 18:35:48.800289] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:02.616 [2024-07-11 18:35:48.800303] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:02.616 [2024-07-11 18:35:48.800316] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:29:02.616 [2024-07-11 18:35:48.800330] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:02.616 [2024-07-11 18:35:48.800344] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:02.616 [2024-07-11 18:35:48.800356] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:29:02.616 [2024-07-11 18:35:48.800368] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:02.616 [2024-07-11 18:35:48.800380] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:02.616 [2024-07-11 18:35:48.800392] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:29:02.616 [2024-07-11 18:35:48.800404] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:02.616 [2024-07-11 18:35:48.800416] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:02.616 [2024-07-11 18:35:48.800428] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:29:02.616 [2024-07-11 18:35:48.800440] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:02.616 [2024-07-11 18:35:48.800452] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:02.616 [2024-07-11 18:35:48.800465] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:29:02.616 [2024-07-11 18:35:48.800476] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:02.616 [2024-07-11 18:35:48.800489] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:02.616 [2024-07-11 18:35:48.800501] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:29:02.616 [2024-07-11 18:35:48.800512] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:02.616 [2024-07-11 18:35:48.800530] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:02.616 [2024-07-11 18:35:48.800544] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:29:02.616 [2024-07-11 18:35:48.800556] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:02.617 [2024-07-11 18:35:48.800568] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:02.617 [2024-07-11 18:35:48.800580] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:29:02.617 [2024-07-11 18:35:48.800592] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:02.617 [2024-07-11 18:35:48.800604] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:02.617 [2024-07-11 18:35:48.800623] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:02.617 [2024-07-11 18:35:48.800636] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:02.617 [2024-07-11 18:35:48.800649] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:02.617 [2024-07-11 18:35:48.800662] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:02.617 [2024-07-11 18:35:48.800675] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:02.617 [2024-07-11 18:35:48.800688] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:02.617 
[2024-07-11 18:35:48.800700] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:02.617 [2024-07-11 18:35:48.800712] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:02.617 [2024-07-11 18:35:48.800724] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:02.617 [2024-07-11 18:35:48.800742] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:02.617 [2024-07-11 18:35:48.800771] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:02.617 [2024-07-11 18:35:48.800787] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:29:02.617 [2024-07-11 18:35:48.800801] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:29:02.617 [2024-07-11 18:35:48.800815] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:29:02.617 [2024-07-11 18:35:48.800828] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:29:02.617 [2024-07-11 18:35:48.800842] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:29:02.617 [2024-07-11 18:35:48.800855] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:29:02.617 [2024-07-11 18:35:48.800868] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:29:02.617 [2024-07-11 18:35:48.800882] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:29:02.617 [2024-07-11 18:35:48.800896] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:29:02.617 [2024-07-11 18:35:48.800910] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:29:02.617 [2024-07-11 18:35:48.800923] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:29:02.617 [2024-07-11 18:35:48.800937] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:29:02.617 [2024-07-11 18:35:48.800950] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:29:02.617 [2024-07-11 18:35:48.800964] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:29:02.617 [2024-07-11 18:35:48.800981] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:02.617 [2024-07-11 18:35:48.800998] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:02.617 [2024-07-11 18:35:48.801012] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:29:02.617 [2024-07-11 18:35:48.801026] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:02.617 [2024-07-11 18:35:48.801053] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:02.617 [2024-07-11 18:35:48.801067] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:02.617 [2024-07-11 18:35:48.801098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.617 [2024-07-11 18:35:48.801114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:02.617 [2024-07-11 18:35:48.801141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.158 ms 00:29:02.617 [2024-07-11 18:35:48.801165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.617 [2024-07-11 18:35:48.819570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.617 [2024-07-11 18:35:48.819640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:02.617 [2024-07-11 18:35:48.819671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.314 ms 00:29:02.617 [2024-07-11 18:35:48.819717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.617 [2024-07-11 18:35:48.819906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.617 [2024-07-11 18:35:48.819943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:02.617 [2024-07-11 18:35:48.819967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.125 ms 00:29:02.617 [2024-07-11 18:35:48.819988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.617 [2024-07-11 18:35:48.832826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.617 [2024-07-11 18:35:48.832877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:02.617 [2024-07-11 18:35:48.832899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.684 ms 00:29:02.617 [2024-07-11 18:35:48.832936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.617 [2024-07-11 18:35:48.833004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.617 [2024-07-11 18:35:48.833023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:02.617 [2024-07-11 18:35:48.833039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:02.617 [2024-07-11 18:35:48.833052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.617 [2024-07-11 18:35:48.833243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.617 [2024-07-11 18:35:48.833268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:02.617 [2024-07-11 18:35:48.833299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:29:02.617 [2024-07-11 18:35:48.833313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.617 [2024-07-11 18:35:48.833498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.617 [2024-07-11 18:35:48.833522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:02.617 [2024-07-11 18:35:48.833537] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.148 ms 00:29:02.617 [2024-07-11 18:35:48.833549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.617 [2024-07-11 18:35:48.839555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.617 [2024-07-11 18:35:48.839605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:02.617 [2024-07-11 18:35:48.839625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.971 ms 00:29:02.617 [2024-07-11 18:35:48.839640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.617 [2024-07-11 18:35:48.839814] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:29:02.617 [2024-07-11 18:35:48.839844] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:29:02.617 [2024-07-11 18:35:48.839861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.617 [2024-07-11 18:35:48.839875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:29:02.617 [2024-07-11 18:35:48.839896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:29:02.617 [2024-07-11 18:35:48.839909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.617 [2024-07-11 18:35:48.857039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.617 [2024-07-11 18:35:48.857076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:29:02.617 [2024-07-11 18:35:48.857105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.102 ms 00:29:02.617 [2024-07-11 18:35:48.857119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.617 [2024-07-11 18:35:48.857286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.617 [2024-07-11 18:35:48.857305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:29:02.617 [2024-07-11 18:35:48.857326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:29:02.617 [2024-07-11 18:35:48.857351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.617 [2024-07-11 18:35:48.857440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.617 [2024-07-11 18:35:48.857462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:29:02.617 [2024-07-11 18:35:48.857476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:29:02.617 [2024-07-11 18:35:48.857490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.617 [2024-07-11 18:35:48.857926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.617 [2024-07-11 18:35:48.857949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:02.617 [2024-07-11 18:35:48.857977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.377 ms 00:29:02.617 [2024-07-11 18:35:48.857991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.617 [2024-07-11 18:35:48.858023] mngt/ftl_mngt_p2l.c: 132:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:29:02.617 [2024-07-11 18:35:48.858052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.617 [2024-07-11 18:35:48.858069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:29:02.617 [2024-07-11 18:35:48.858115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:29:02.617 [2024-07-11 18:35:48.858131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.617 [2024-07-11 18:35:48.868966] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:29:02.617 [2024-07-11 18:35:48.869229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.617 [2024-07-11 18:35:48.869265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:02.617 [2024-07-11 18:35:48.869287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.065 ms 00:29:02.617 [2024-07-11 18:35:48.869304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.617 [2024-07-11 18:35:48.872238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.617 [2024-07-11 18:35:48.872275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:29:02.617 [2024-07-11 18:35:48.872292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.890 ms 00:29:02.617 [2024-07-11 18:35:48.872305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.617 [2024-07-11 18:35:48.872432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.617 [2024-07-11 18:35:48.872455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:02.617 [2024-07-11 18:35:48.872475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:29:02.617 [2024-07-11 18:35:48.872488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.617 [2024-07-11 18:35:48.872527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.617 [2024-07-11 18:35:48.872545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:02.618 [2024-07-11 18:35:48.872559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:29:02.618 [2024-07-11 18:35:48.872572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.618 [2024-07-11 18:35:48.872623] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:29:02.618 [2024-07-11 18:35:48.872643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.618 [2024-07-11 18:35:48.872656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:29:02.618 [2024-07-11 18:35:48.872670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:29:02.618 [2024-07-11 18:35:48.872691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.618 [2024-07-11 18:35:48.877205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.618 [2024-07-11 18:35:48.877256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:02.618 [2024-07-11 18:35:48.877276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.480 ms 00:29:02.618 [2024-07-11 18:35:48.877291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.618 [2024-07-11 18:35:48.877388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.618 [2024-07-11 18:35:48.877412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:02.618 [2024-07-11 18:35:48.877442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.043 ms 00:29:46.733 [2024-07-11 18:36:32.877455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.733 [2024-07-11 18:36:32.878872] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 83.570 ms, result 0 00:29:46.733  Copying: 1024/1024 [MB] (average 23 MBps)[2024-07-11 18:36:32.895576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.733 [2024-07-11 18:36:32.895839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:46.733 [2024-07-11 18:36:32.895872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:46.733 [2024-07-11 18:36:32.895886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.733 [2024-07-11 18:36:32.898941] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:46.733 [2024-07-11 18:36:32.901246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.733 [2024-07-11 18:36:32.901332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:46.733 [2024-07-11 18:36:32.901375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.215 ms 00:29:46.733 [2024-07-11 18:36:32.901414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.733 [2024-07-11 18:36:32.910654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.733 [2024-07-11 18:36:32.910691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:46.733 [2024-07-11 18:36:32.910720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.064 ms 00:29:46.733 [2024-07-11 18:36:32.910730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.733 [2024-07-11 18:36:32.910773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.733 [2024-07-11 18:36:32.910787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:29:46.733 [2024-07-11 18:36:32.910797] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:29:46.733 [2024-07-11 18:36:32.910817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.733 [2024-07-11 18:36:32.910875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.733 [2024-07-11 18:36:32.910888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:29:46.733 [2024-07-11 18:36:32.910898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:29:46.733 [2024-07-11 18:36:32.910915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.733 [2024-07-11 18:36:32.910932] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:46.733 [2024-07-11 18:36:32.910961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 130304 / 261120 wr_cnt: 1 state: open 00:29:46.733 [2024-07-11 18:36:32.910989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:29:46.733 [2024-07-11 18:36:32.910999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:46.733 [2024-07-11 18:36:32.911009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:46.733 [2024-07-11 18:36:32.911019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:46.733 [2024-07-11 18:36:32.911028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:46.733 [2024-07-11 18:36:32.911038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:46.733 [2024-07-11 18:36:32.911048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:46.733 [2024-07-11 18:36:32.911057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:46.733 [2024-07-11 18:36:32.911067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:46.733 [2024-07-11 18:36:32.911077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:46.733 [2024-07-11 18:36:32.911087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:46.733 [2024-07-11 18:36:32.911096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:46.733 [2024-07-11 18:36:32.911105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:46.733 [2024-07-11 18:36:32.911115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:46.733 [2024-07-11 18:36:32.911138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:46.733 [2024-07-11 18:36:32.911150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:46.733 [2024-07-11 18:36:32.911160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:46.733 [2024-07-11 18:36:32.911169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:46.733 [2024-07-11 18:36:32.911179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:46.733 [2024-07-11 18:36:32.911203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:46.733 [2024-07-11 18:36:32.911214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:46.733 [2024-07-11 18:36:32.911224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:46.733 [2024-07-11 18:36:32.911234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:46.733 [2024-07-11 18:36:32.911244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:46.733 [2024-07-11 18:36:32.911254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:46.733 [2024-07-11 18:36:32.911264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:46.733 [2024-07-11 18:36:32.911274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:46.733 [2024-07-11 18:36:32.911284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:46.733 [2024-07-11 18:36:32.911294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:46.733 [2024-07-11 18:36:32.911303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:46.733 [2024-07-11 18:36:32.911313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:46.733 [2024-07-11 18:36:32.911323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:46.733 [2024-07-11 18:36:32.911333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:46.733 [2024-07-11 18:36:32.911342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:46.733 [2024-07-11 18:36:32.911352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:46.733 [2024-07-11 18:36:32.911362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:46.733 [2024-07-11 18:36:32.911372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:46.733 [2024-07-11 18:36:32.911381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:46.733 [2024-07-11 18:36:32.911391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:46.733 [2024-07-11 18:36:32.911401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:46.733 [2024-07-11 18:36:32.911410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:46.733 [2024-07-11 18:36:32.911420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:46.733 [2024-07-11 18:36:32.911430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:46.733 [2024-07-11 18:36:32.911440] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:46.733 [2024-07-11 18:36:32.911450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:46.734 [2024-07-11 18:36:32.911460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:46.734 [2024-07-11 18:36:32.911469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:46.734 [2024-07-11 18:36:32.911479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:46.734 [2024-07-11 18:36:32.911493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:46.734 [2024-07-11 18:36:32.911503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:46.734 [2024-07-11 18:36:32.911513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:46.734 [2024-07-11 18:36:32.911522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:46.734 [2024-07-11 18:36:32.911532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:46.734 [2024-07-11 18:36:32.911542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:46.734 [2024-07-11 18:36:32.911577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:46.734 [2024-07-11 18:36:32.911603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:46.734 [2024-07-11 18:36:32.911618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:46.734 [2024-07-11 18:36:32.911628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:46.734 [2024-07-11 18:36:32.911638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:46.734 [2024-07-11 18:36:32.911648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:46.734 [2024-07-11 18:36:32.911658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:46.734 [2024-07-11 18:36:32.911668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:46.734 [2024-07-11 18:36:32.911678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:46.734 [2024-07-11 18:36:32.911687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:46.734 [2024-07-11 18:36:32.911698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:46.734 [2024-07-11 18:36:32.911708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:46.734 [2024-07-11 18:36:32.911717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:46.734 [2024-07-11 18:36:32.911727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:46.734 [2024-07-11 
18:36:32.911737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:46.734 [2024-07-11 18:36:32.911747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:46.734 [2024-07-11 18:36:32.911757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:46.734 [2024-07-11 18:36:32.911773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:46.734 [2024-07-11 18:36:32.911783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:46.734 [2024-07-11 18:36:32.911793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:46.734 [2024-07-11 18:36:32.911803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:46.734 [2024-07-11 18:36:32.911812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:46.734 [2024-07-11 18:36:32.911822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:46.734 [2024-07-11 18:36:32.911832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:46.734 [2024-07-11 18:36:32.911841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:46.734 [2024-07-11 18:36:32.911851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:46.734 [2024-07-11 18:36:32.911861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:46.734 [2024-07-11 18:36:32.911871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:46.734 [2024-07-11 18:36:32.911880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:46.734 [2024-07-11 18:36:32.911890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:46.734 [2024-07-11 18:36:32.911900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:46.734 [2024-07-11 18:36:32.911909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:46.734 [2024-07-11 18:36:32.911919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:46.734 [2024-07-11 18:36:32.911929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:46.734 [2024-07-11 18:36:32.911940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:46.734 [2024-07-11 18:36:32.911951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:46.734 [2024-07-11 18:36:32.911961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:46.734 [2024-07-11 18:36:32.911971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:46.734 [2024-07-11 18:36:32.911995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 
00:29:46.734 [2024-07-11 18:36:32.912005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:46.734 [2024-07-11 18:36:32.912014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:46.734 [2024-07-11 18:36:32.912024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:46.734 [2024-07-11 18:36:32.912034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:46.734 [2024-07-11 18:36:32.912043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:46.734 [2024-07-11 18:36:32.912064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:46.734 [2024-07-11 18:36:32.912081] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:46.734 [2024-07-11 18:36:32.912098] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8dbb777d-c192-43ab-944a-cae90757a7a8 00:29:46.734 [2024-07-11 18:36:32.912108] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 130304 00:29:46.734 [2024-07-11 18:36:32.912117] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 130336 00:29:46.734 [2024-07-11 18:36:32.912138] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 130304 00:29:46.734 [2024-07-11 18:36:32.912152] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0002 00:29:46.734 [2024-07-11 18:36:32.912161] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:46.734 [2024-07-11 18:36:32.912171] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:46.734 [2024-07-11 18:36:32.912179] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:46.734 [2024-07-11 18:36:32.912188] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:46.734 [2024-07-11 18:36:32.912196] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:46.734 [2024-07-11 18:36:32.912205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.734 [2024-07-11 18:36:32.912215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:46.734 [2024-07-11 18:36:32.912225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.275 ms 00:29:46.734 [2024-07-11 18:36:32.912235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.734 [2024-07-11 18:36:32.913493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.734 [2024-07-11 18:36:32.913556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:46.734 [2024-07-11 18:36:32.913569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.229 ms 00:29:46.734 [2024-07-11 18:36:32.913579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.734 [2024-07-11 18:36:32.913654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.734 [2024-07-11 18:36:32.913667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:46.734 [2024-07-11 18:36:32.913681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:29:46.734 [2024-07-11 18:36:32.913690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.735 [2024-07-11 18:36:32.917666] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:46.735 [2024-07-11 18:36:32.917713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:46.735 [2024-07-11 18:36:32.917726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:46.735 [2024-07-11 18:36:32.917735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.735 [2024-07-11 18:36:32.917785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:46.735 [2024-07-11 18:36:32.917798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:46.735 [2024-07-11 18:36:32.917808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:46.735 [2024-07-11 18:36:32.917817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.735 [2024-07-11 18:36:32.917863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:46.735 [2024-07-11 18:36:32.917883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:46.735 [2024-07-11 18:36:32.917893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:46.735 [2024-07-11 18:36:32.917902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.735 [2024-07-11 18:36:32.917920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:46.735 [2024-07-11 18:36:32.917947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:46.735 [2024-07-11 18:36:32.917957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:46.735 [2024-07-11 18:36:32.917966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.735 [2024-07-11 18:36:32.925075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:46.735 [2024-07-11 18:36:32.925145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:46.735 [2024-07-11 18:36:32.925175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:46.735 [2024-07-11 18:36:32.925185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.735 [2024-07-11 18:36:32.931742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:46.735 [2024-07-11 18:36:32.931811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:46.735 [2024-07-11 18:36:32.931841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:46.735 [2024-07-11 18:36:32.931851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.735 [2024-07-11 18:36:32.931919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:46.735 [2024-07-11 18:36:32.931949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:46.735 [2024-07-11 18:36:32.931964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:46.735 [2024-07-11 18:36:32.931973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.735 [2024-07-11 18:36:32.931998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:46.735 [2024-07-11 18:36:32.932010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:46.735 [2024-07-11 18:36:32.932019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:46.735 [2024-07-11 18:36:32.932029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:29:46.735 [2024-07-11 18:36:32.932244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:46.735 [2024-07-11 18:36:32.932261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:46.735 [2024-07-11 18:36:32.932277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:46.735 [2024-07-11 18:36:32.932287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.735 [2024-07-11 18:36:32.932328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:46.735 [2024-07-11 18:36:32.932349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:46.735 [2024-07-11 18:36:32.932363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:46.735 [2024-07-11 18:36:32.932373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.735 [2024-07-11 18:36:32.932421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:46.735 [2024-07-11 18:36:32.932451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:46.735 [2024-07-11 18:36:32.932462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:46.735 [2024-07-11 18:36:32.932476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.735 [2024-07-11 18:36:32.932523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:46.735 [2024-07-11 18:36:32.932538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:46.735 [2024-07-11 18:36:32.932548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:46.735 [2024-07-11 18:36:32.932557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.735 [2024-07-11 18:36:32.932682] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 39.552 ms, result 0 00:29:47.683 00:29:47.683 00:29:47.683 18:36:33 ftl.ftl_restore_fast -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:29:47.683 [2024-07-11 18:36:33.849417] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:29:47.683 [2024-07-11 18:36:33.849629] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98156 ] 00:29:47.683 [2024-07-11 18:36:33.995412] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:47.683 [2024-07-11 18:36:34.031711] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:47.965 [2024-07-11 18:36:34.115326] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:47.965 [2024-07-11 18:36:34.115431] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:47.965 [2024-07-11 18:36:34.269244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.965 [2024-07-11 18:36:34.269288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:47.965 [2024-07-11 18:36:34.269330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:47.965 [2024-07-11 18:36:34.269340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.965 [2024-07-11 18:36:34.269403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.965 [2024-07-11 18:36:34.269421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:47.965 [2024-07-11 18:36:34.269435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:29:47.965 [2024-07-11 18:36:34.269444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.965 [2024-07-11 18:36:34.269478] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:47.965 [2024-07-11 18:36:34.269745] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:47.965 [2024-07-11 18:36:34.269790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.965 [2024-07-11 18:36:34.269802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:47.965 [2024-07-11 18:36:34.269821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.325 ms 00:29:47.965 [2024-07-11 18:36:34.269831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.965 [2024-07-11 18:36:34.270288] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:29:47.965 [2024-07-11 18:36:34.270350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.965 [2024-07-11 18:36:34.270363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:29:47.965 [2024-07-11 18:36:34.270388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:29:47.965 [2024-07-11 18:36:34.270398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.965 [2024-07-11 18:36:34.270450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.965 [2024-07-11 18:36:34.270465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:29:47.965 [2024-07-11 18:36:34.270475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:29:47.965 [2024-07-11 18:36:34.270485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.965 [2024-07-11 18:36:34.270829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:29:47.965 [2024-07-11 18:36:34.270857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:47.965 [2024-07-11 18:36:34.270874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.296 ms 00:29:47.965 [2024-07-11 18:36:34.270886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.965 [2024-07-11 18:36:34.270974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.965 [2024-07-11 18:36:34.271005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:47.965 [2024-07-11 18:36:34.271017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:29:47.965 [2024-07-11 18:36:34.271026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.965 [2024-07-11 18:36:34.271064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.965 [2024-07-11 18:36:34.271112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:47.966 [2024-07-11 18:36:34.271126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:29:47.966 [2024-07-11 18:36:34.271136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.966 [2024-07-11 18:36:34.271167] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:47.966 [2024-07-11 18:36:34.272642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.966 [2024-07-11 18:36:34.272687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:47.966 [2024-07-11 18:36:34.272727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.481 ms 00:29:47.966 [2024-07-11 18:36:34.272737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.966 [2024-07-11 18:36:34.272779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.966 [2024-07-11 18:36:34.272798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:47.966 [2024-07-11 18:36:34.272817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:29:47.966 [2024-07-11 18:36:34.272830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.966 [2024-07-11 18:36:34.272861] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:29:47.966 [2024-07-11 18:36:34.272884] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:29:47.966 [2024-07-11 18:36:34.272932] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:29:47.966 [2024-07-11 18:36:34.272953] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:29:47.966 [2024-07-11 18:36:34.273040] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:47.966 [2024-07-11 18:36:34.273054] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:47.966 [2024-07-11 18:36:34.273079] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:29:47.966 [2024-07-11 18:36:34.273104] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:47.966 [2024-07-11 18:36:34.273126] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:47.966 [2024-07-11 18:36:34.273137] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:47.966 [2024-07-11 18:36:34.273147] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:47.966 [2024-07-11 18:36:34.273157] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:47.966 [2024-07-11 18:36:34.273167] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:47.966 [2024-07-11 18:36:34.273180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.966 [2024-07-11 18:36:34.273200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:47.966 [2024-07-11 18:36:34.273210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.318 ms 00:29:47.966 [2024-07-11 18:36:34.273220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.966 [2024-07-11 18:36:34.273299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.966 [2024-07-11 18:36:34.273311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:47.966 [2024-07-11 18:36:34.273321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:29:47.966 [2024-07-11 18:36:34.273330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.966 [2024-07-11 18:36:34.273450] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:47.966 [2024-07-11 18:36:34.273470] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:47.966 [2024-07-11 18:36:34.273493] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:47.966 [2024-07-11 18:36:34.273503] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:47.966 [2024-07-11 18:36:34.273514] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:47.966 [2024-07-11 18:36:34.273523] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:47.966 [2024-07-11 18:36:34.273532] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:29:47.966 [2024-07-11 18:36:34.273544] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:47.966 [2024-07-11 18:36:34.273555] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:29:47.966 [2024-07-11 18:36:34.273564] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:47.966 [2024-07-11 18:36:34.273573] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:47.966 [2024-07-11 18:36:34.273582] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:29:47.966 [2024-07-11 18:36:34.273591] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:47.966 [2024-07-11 18:36:34.273600] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:47.966 [2024-07-11 18:36:34.273610] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:29:47.966 [2024-07-11 18:36:34.273619] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:47.966 [2024-07-11 18:36:34.273629] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:47.966 [2024-07-11 18:36:34.273638] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:29:47.966 [2024-07-11 18:36:34.273647] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:47.966 [2024-07-11 18:36:34.273656] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:47.966 [2024-07-11 18:36:34.273665] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:29:47.966 [2024-07-11 18:36:34.273674] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:47.966 [2024-07-11 18:36:34.273684] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:47.966 [2024-07-11 18:36:34.273695] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:29:47.966 [2024-07-11 18:36:34.273704] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:47.966 [2024-07-11 18:36:34.273714] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:47.966 [2024-07-11 18:36:34.273723] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:29:47.966 [2024-07-11 18:36:34.273731] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:47.966 [2024-07-11 18:36:34.273740] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:47.966 [2024-07-11 18:36:34.273749] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:29:47.966 [2024-07-11 18:36:34.273758] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:47.966 [2024-07-11 18:36:34.273767] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:47.966 [2024-07-11 18:36:34.273776] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:29:47.966 [2024-07-11 18:36:34.273784] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:47.966 [2024-07-11 18:36:34.273793] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:47.966 [2024-07-11 18:36:34.273802] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:29:47.966 [2024-07-11 18:36:34.273811] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:47.966 [2024-07-11 18:36:34.273820] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:47.966 [2024-07-11 18:36:34.273829] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:29:47.966 [2024-07-11 18:36:34.273840] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:47.966 [2024-07-11 18:36:34.273850] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:47.966 [2024-07-11 18:36:34.273859] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:29:47.966 [2024-07-11 18:36:34.273867] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:47.966 [2024-07-11 18:36:34.273876] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:47.966 [2024-07-11 18:36:34.273889] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:47.966 [2024-07-11 18:36:34.273900] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:47.966 [2024-07-11 18:36:34.273909] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:47.966 [2024-07-11 18:36:34.273927] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:47.966 [2024-07-11 18:36:34.273938] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:47.966 [2024-07-11 18:36:34.273948] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:47.966 
[2024-07-11 18:36:34.273957] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:47.966 [2024-07-11 18:36:34.273966] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:47.967 [2024-07-11 18:36:34.273975] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:47.967 [2024-07-11 18:36:34.273986] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:47.967 [2024-07-11 18:36:34.273998] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:47.967 [2024-07-11 18:36:34.274011] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:29:47.967 [2024-07-11 18:36:34.274023] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:29:47.967 [2024-07-11 18:36:34.274033] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:29:47.967 [2024-07-11 18:36:34.274043] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:29:47.967 [2024-07-11 18:36:34.274052] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:29:47.967 [2024-07-11 18:36:34.274062] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:29:47.967 [2024-07-11 18:36:34.274073] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:29:47.967 [2024-07-11 18:36:34.274119] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:29:47.967 [2024-07-11 18:36:34.274131] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:29:47.967 [2024-07-11 18:36:34.274141] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:29:47.967 [2024-07-11 18:36:34.274151] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:29:47.967 [2024-07-11 18:36:34.274161] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:29:47.967 [2024-07-11 18:36:34.274171] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:29:47.967 [2024-07-11 18:36:34.274182] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:29:47.967 [2024-07-11 18:36:34.274192] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:47.967 [2024-07-11 18:36:34.274203] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:47.967 [2024-07-11 18:36:34.274227] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:29:47.967 [2024-07-11 18:36:34.274238] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:47.967 [2024-07-11 18:36:34.274257] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:47.967 [2024-07-11 18:36:34.274268] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:47.967 [2024-07-11 18:36:34.274279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.967 [2024-07-11 18:36:34.274289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:47.967 [2024-07-11 18:36:34.274300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.886 ms 00:29:47.967 [2024-07-11 18:36:34.274310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.967 [2024-07-11 18:36:34.288946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.967 [2024-07-11 18:36:34.289007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:47.967 [2024-07-11 18:36:34.289043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.581 ms 00:29:47.967 [2024-07-11 18:36:34.289055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.967 [2024-07-11 18:36:34.289217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.967 [2024-07-11 18:36:34.289241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:47.967 [2024-07-11 18:36:34.289274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:29:47.967 [2024-07-11 18:36:34.289289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.967 [2024-07-11 18:36:34.297789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.967 [2024-07-11 18:36:34.297839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:47.967 [2024-07-11 18:36:34.297873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.430 ms 00:29:47.967 [2024-07-11 18:36:34.297888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.967 [2024-07-11 18:36:34.297940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.967 [2024-07-11 18:36:34.297968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:47.967 [2024-07-11 18:36:34.297985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:47.967 [2024-07-11 18:36:34.297998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.967 [2024-07-11 18:36:34.298166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.967 [2024-07-11 18:36:34.298189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:47.967 [2024-07-11 18:36:34.298205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.111 ms 00:29:47.967 [2024-07-11 18:36:34.298232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.967 [2024-07-11 18:36:34.298420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.967 [2024-07-11 18:36:34.298454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:47.967 [2024-07-11 18:36:34.298482] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.141 ms 00:29:47.967 [2024-07-11 18:36:34.298495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.967 [2024-07-11 18:36:34.303697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.967 [2024-07-11 18:36:34.303756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:47.967 [2024-07-11 18:36:34.303775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.168 ms 00:29:47.967 [2024-07-11 18:36:34.303802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.967 [2024-07-11 18:36:34.303982] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:29:47.967 [2024-07-11 18:36:34.304033] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:29:47.967 [2024-07-11 18:36:34.304046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.967 [2024-07-11 18:36:34.304057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:29:47.967 [2024-07-11 18:36:34.304072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:29:47.967 [2024-07-11 18:36:34.304081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.967 [2024-07-11 18:36:34.315131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.967 [2024-07-11 18:36:34.315159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:29:47.967 [2024-07-11 18:36:34.315187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.017 ms 00:29:47.967 [2024-07-11 18:36:34.315197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.967 [2024-07-11 18:36:34.315294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.967 [2024-07-11 18:36:34.315308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:29:47.967 [2024-07-11 18:36:34.315329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:29:47.967 [2024-07-11 18:36:34.315346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.967 [2024-07-11 18:36:34.315431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.967 [2024-07-11 18:36:34.315456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:29:47.967 [2024-07-11 18:36:34.315481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:29:47.967 [2024-07-11 18:36:34.315491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.967 [2024-07-11 18:36:34.315863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.967 [2024-07-11 18:36:34.315920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:47.967 [2024-07-11 18:36:34.315933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.325 ms 00:29:47.967 [2024-07-11 18:36:34.315943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.967 [2024-07-11 18:36:34.315963] mngt/ftl_mngt_p2l.c: 132:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:29:47.967 [2024-07-11 18:36:34.315981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.967 [2024-07-11 18:36:34.316005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:29:47.967 [2024-07-11 18:36:34.316025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:29:47.967 [2024-07-11 18:36:34.316038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.967 [2024-07-11 18:36:34.323181] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:29:47.967 [2024-07-11 18:36:34.323363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.967 [2024-07-11 18:36:34.323389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:47.967 [2024-07-11 18:36:34.323404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.302 ms 00:29:47.967 [2024-07-11 18:36:34.323413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.967 [2024-07-11 18:36:34.325571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.967 [2024-07-11 18:36:34.325615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:29:47.967 [2024-07-11 18:36:34.325643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.135 ms 00:29:47.967 [2024-07-11 18:36:34.325663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.967 [2024-07-11 18:36:34.325718] mngt/ftl_mngt_band.c: 414:ftl_mngt_finalize_init_bands: *NOTICE*: [FTL][ftl0] SHM: band open P2L map df_id 0x2400000 00:29:47.968 [2024-07-11 18:36:34.326358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.968 [2024-07-11 18:36:34.326402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:47.968 [2024-07-11 18:36:34.326430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.655 ms 00:29:47.968 [2024-07-11 18:36:34.326455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.968 [2024-07-11 18:36:34.326506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.968 [2024-07-11 18:36:34.326521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:47.968 [2024-07-11 18:36:34.326532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:29:47.968 [2024-07-11 18:36:34.326552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.968 [2024-07-11 18:36:34.326598] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:29:47.968 [2024-07-11 18:36:34.326613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.968 [2024-07-11 18:36:34.326623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:29:47.968 [2024-07-11 18:36:34.326637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:29:47.968 [2024-07-11 18:36:34.326646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.968 [2024-07-11 18:36:34.330234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.968 [2024-07-11 18:36:34.330290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:47.968 [2024-07-11 18:36:34.330305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.567 ms 00:29:47.968 [2024-07-11 18:36:34.330315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.968 [2024-07-11 18:36:34.330381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:47.968 [2024-07-11 18:36:34.330397] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:47.968 [2024-07-11 18:36:34.330408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:29:47.968 [2024-07-11 18:36:34.330430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:47.968 [2024-07-11 18:36:34.340496] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 68.198 ms, result 0 00:30:32.640  Copying: 25/1024 [MB] (25 MBps) Copying: 48/1024 [MB] (23 MBps) Copying: 71/1024 [MB] (22 MBps) Copying: 93/1024 [MB] (22 MBps) Copying: 116/1024 [MB] (22 MBps) Copying: 140/1024 [MB] (23 MBps) Copying: 163/1024 [MB] (22 MBps) Copying: 185/1024 [MB] (22 MBps) Copying: 208/1024 [MB] (22 MBps) Copying: 230/1024 [MB] (22 MBps) Copying: 253/1024 [MB] (22 MBps) Copying: 276/1024 [MB] (22 MBps) Copying: 299/1024 [MB] (23 MBps) Copying: 322/1024 [MB] (23 MBps) Copying: 345/1024 [MB] (23 MBps) Copying: 368/1024 [MB] (22 MBps) Copying: 392/1024 [MB] (23 MBps) Copying: 415/1024 [MB] (23 MBps) Copying: 438/1024 [MB] (23 MBps) Copying: 461/1024 [MB] (23 MBps) Copying: 484/1024 [MB] (23 MBps) Copying: 507/1024 [MB] (22 MBps) Copying: 530/1024 [MB] (22 MBps) Copying: 553/1024 [MB] (23 MBps) Copying: 576/1024 [MB] (23 MBps) Copying: 599/1024 [MB] (22 MBps) Copying: 621/1024 [MB] (22 MBps) Copying: 645/1024 [MB] (23 MBps) Copying: 668/1024 [MB] (22 MBps) Copying: 691/1024 [MB] (23 MBps) Copying: 714/1024 [MB] (22 MBps) Copying: 737/1024 [MB] (23 MBps) Copying: 761/1024 [MB] (23 MBps) Copying: 784/1024 [MB] (23 MBps) Copying: 808/1024 [MB] (23 MBps) Copying: 830/1024 [MB] (22 MBps) Copying: 853/1024 [MB] (22 MBps) Copying: 877/1024 [MB] (23 MBps) Copying: 900/1024 [MB] (23 MBps) Copying: 924/1024 [MB] (23 MBps) Copying: 947/1024 [MB] (23 MBps) Copying: 971/1024 [MB] (23 MBps) Copying: 994/1024 [MB] (23 MBps) Copying: 1018/1024 [MB] (23 MBps) Copying: 1024/1024 [MB] (average 23 MBps)[2024-07-11 18:37:18.896802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:32.640 [2024-07-11 18:37:18.896873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:30:32.640 [2024-07-11 18:37:18.896906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:30:32.640 [2024-07-11 18:37:18.896918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:32.640 [2024-07-11 18:37:18.896945] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:30:32.640 [2024-07-11 18:37:18.897463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:32.640 [2024-07-11 18:37:18.897492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:32.640 [2024-07-11 18:37:18.897512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.497 ms 00:30:32.640 [2024-07-11 18:37:18.897533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:32.640 [2024-07-11 18:37:18.897788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:32.640 [2024-07-11 18:37:18.897825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:32.640 [2024-07-11 18:37:18.897837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.230 ms 00:30:32.640 [2024-07-11 18:37:18.897848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:32.640 [2024-07-11 18:37:18.897881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:30:32.640 [2024-07-11 18:37:18.897895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:30:32.640 [2024-07-11 18:37:18.897907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:30:32.640 [2024-07-11 18:37:18.897923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:32.640 [2024-07-11 18:37:18.897978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:32.640 [2024-07-11 18:37:18.897996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:30:32.640 [2024-07-11 18:37:18.898022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:30:32.640 [2024-07-11 18:37:18.898042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:32.640 [2024-07-11 18:37:18.898077] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:32.640 [2024-07-11 18:37:18.898125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 133888 / 261120 wr_cnt: 1 state: open 00:30:32.640 [2024-07-11 18:37:18.898154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:30:32.640 [2024-07-11 18:37:18.898167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:30:32.640 [2024-07-11 18:37:18.898179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:32.640 [2024-07-11 18:37:18.898191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:32.640 [2024-07-11 18:37:18.898203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:32.640 [2024-07-11 18:37:18.898215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:32.640 [2024-07-11 18:37:18.898227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:32.640 [2024-07-11 18:37:18.898238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:32.640 [2024-07-11 18:37:18.898250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:32.640 [2024-07-11 18:37:18.898262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:32.640 [2024-07-11 18:37:18.898274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:32.640 [2024-07-11 18:37:18.898287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:32.640 [2024-07-11 18:37:18.898299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:32.640 [2024-07-11 18:37:18.898311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:32.640 [2024-07-11 18:37:18.898323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:32.640 [2024-07-11 18:37:18.898335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:32.640 [2024-07-11 18:37:18.898347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:32.640 [2024-07-11 
18:37:18.898359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:30:32.640 [2024-07-11 18:37:18.898371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:30:32.640 [2024-07-11 18:37:18.898382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:30:32.640 [2024-07-11 18:37:18.898394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:30:32.640 [2024-07-11 18:37:18.898406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:30:32.640 [2024-07-11 18:37:18.898432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:30:32.640 [2024-07-11 18:37:18.898443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:30:32.641 [2024-07-11 18:37:18.898455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:30:32.641 [2024-07-11 18:37:18.898466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:30:32.641 [2024-07-11 18:37:18.898478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:30:32.641 [2024-07-11 18:37:18.898489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:30:32.641 [2024-07-11 18:37:18.898515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:30:32.641 [2024-07-11 18:37:18.898526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:30:32.641 [2024-07-11 18:37:18.898552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:30:32.641 [2024-07-11 18:37:18.898563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:30:32.641 [2024-07-11 18:37:18.898575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:30:32.641 [2024-07-11 18:37:18.898586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:30:32.641 [2024-07-11 18:37:18.898597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:30:32.641 [2024-07-11 18:37:18.898609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:30:32.641 [2024-07-11 18:37:18.898620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:30:32.641 [2024-07-11 18:37:18.898631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:30:32.641 [2024-07-11 18:37:18.898643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:30:32.641 [2024-07-11 18:37:18.898654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:30:32.641 [2024-07-11 18:37:18.898666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:30:32.641 [2024-07-11 18:37:18.898677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 
00:30:32.641 [2024-07-11 18:37:18.898689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:30:32.641 [2024-07-11 18:37:18.898700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:30:32.641 [2024-07-11 18:37:18.898712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:30:32.641 [2024-07-11 18:37:18.898724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:30:32.641 [2024-07-11 18:37:18.898736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:30:32.641 [2024-07-11 18:37:18.898747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:30:32.641 [2024-07-11 18:37:18.898759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:30:32.641 [2024-07-11 18:37:18.898770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:30:32.641 [2024-07-11 18:37:18.898781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:30:32.641 [2024-07-11 18:37:18.898792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:30:32.641 [2024-07-11 18:37:18.898804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:30:32.641 [2024-07-11 18:37:18.898815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:30:32.641 [2024-07-11 18:37:18.898826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:30:32.641 [2024-07-11 18:37:18.898838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:30:32.641 [2024-07-11 18:37:18.898849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:30:32.641 [2024-07-11 18:37:18.898861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:30:32.641 [2024-07-11 18:37:18.898872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:30:32.641 [2024-07-11 18:37:18.898883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:30:32.641 [2024-07-11 18:37:18.898909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:30:32.641 [2024-07-11 18:37:18.898920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:30:32.641 [2024-07-11 18:37:18.898931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:30:32.641 [2024-07-11 18:37:18.898942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:30:32.641 [2024-07-11 18:37:18.898953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:30:32.641 [2024-07-11 18:37:18.898964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:30:32.641 [2024-07-11 18:37:18.898975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 
wr_cnt: 0 state: free 00:30:32.641 [2024-07-11 18:37:18.898987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:30:32.641 [2024-07-11 18:37:18.898998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:30:32.641 [2024-07-11 18:37:18.899009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:30:32.641 [2024-07-11 18:37:18.899020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:30:32.641 [2024-07-11 18:37:18.899030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:30:32.641 [2024-07-11 18:37:18.899041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:30:32.641 [2024-07-11 18:37:18.899052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:30:32.641 [2024-07-11 18:37:18.899064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:30:32.641 [2024-07-11 18:37:18.899074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:30:32.641 [2024-07-11 18:37:18.899100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:30:32.641 [2024-07-11 18:37:18.899141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:30:32.641 [2024-07-11 18:37:18.900256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:30:32.641 [2024-07-11 18:37:18.900270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:30:32.641 [2024-07-11 18:37:18.900280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:30:32.641 [2024-07-11 18:37:18.900291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:30:32.641 [2024-07-11 18:37:18.900303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:30:32.641 [2024-07-11 18:37:18.900314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:30:32.641 [2024-07-11 18:37:18.900324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:30:32.641 [2024-07-11 18:37:18.900335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:30:32.641 [2024-07-11 18:37:18.900346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:30:32.641 [2024-07-11 18:37:18.900356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:30:32.641 [2024-07-11 18:37:18.900367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:30:32.641 [2024-07-11 18:37:18.900378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:30:32.641 [2024-07-11 18:37:18.900403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:30:32.641 [2024-07-11 18:37:18.900429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 93: 0 / 261120 wr_cnt: 0 state: free 00:30:32.641 [2024-07-11 18:37:18.900455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:30:32.641 [2024-07-11 18:37:18.900466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:30:32.641 [2024-07-11 18:37:18.900477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:30:32.641 [2024-07-11 18:37:18.900503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:30:32.641 [2024-07-11 18:37:18.900514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:30:32.641 [2024-07-11 18:37:18.900525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:30:32.641 [2024-07-11 18:37:18.900551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:30:32.641 [2024-07-11 18:37:18.900572] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:30:32.641 [2024-07-11 18:37:18.900583] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8dbb777d-c192-43ab-944a-cae90757a7a8 00:30:32.641 [2024-07-11 18:37:18.900595] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 133888 00:30:32.641 [2024-07-11 18:37:18.900606] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 3616 00:30:32.641 [2024-07-11 18:37:18.900621] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 3584 00:30:32.641 [2024-07-11 18:37:18.900632] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0089 00:30:32.641 [2024-07-11 18:37:18.900643] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:32.641 [2024-07-11 18:37:18.900654] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:32.641 [2024-07-11 18:37:18.900664] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:32.641 [2024-07-11 18:37:18.900674] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:32.641 [2024-07-11 18:37:18.900686] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:32.641 [2024-07-11 18:37:18.900698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:32.641 [2024-07-11 18:37:18.900709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:32.641 [2024-07-11 18:37:18.900729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.622 ms 00:30:32.641 [2024-07-11 18:37:18.900740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:32.641 [2024-07-11 18:37:18.902086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:32.641 [2024-07-11 18:37:18.902139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:30:32.641 [2024-07-11 18:37:18.902153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.322 ms 00:30:32.641 [2024-07-11 18:37:18.902177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:32.641 [2024-07-11 18:37:18.902252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:32.641 [2024-07-11 18:37:18.902266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:32.641 [2024-07-11 18:37:18.902293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.052 ms 00:30:32.641 [2024-07-11 18:37:18.902319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:32.641 [2024-07-11 18:37:18.907217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:32.641 [2024-07-11 18:37:18.907261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:32.641 [2024-07-11 18:37:18.907275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:32.641 [2024-07-11 18:37:18.907285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:32.641 [2024-07-11 18:37:18.907341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:32.641 [2024-07-11 18:37:18.907355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:32.641 [2024-07-11 18:37:18.907382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:32.641 [2024-07-11 18:37:18.907408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:32.641 [2024-07-11 18:37:18.907523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:32.641 [2024-07-11 18:37:18.907541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:32.641 [2024-07-11 18:37:18.907551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:32.641 [2024-07-11 18:37:18.907562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:32.641 [2024-07-11 18:37:18.907582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:32.641 [2024-07-11 18:37:18.907621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:32.641 [2024-07-11 18:37:18.907650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:32.641 [2024-07-11 18:37:18.907661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:32.641 [2024-07-11 18:37:18.915299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:32.641 [2024-07-11 18:37:18.915348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:32.641 [2024-07-11 18:37:18.915364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:32.641 [2024-07-11 18:37:18.915375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:32.641 [2024-07-11 18:37:18.922765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:32.641 [2024-07-11 18:37:18.922807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:32.641 [2024-07-11 18:37:18.922838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:32.641 [2024-07-11 18:37:18.922847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:32.641 [2024-07-11 18:37:18.922893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:32.641 [2024-07-11 18:37:18.922910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:32.641 [2024-07-11 18:37:18.922920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:32.641 [2024-07-11 18:37:18.922929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:32.641 [2024-07-11 18:37:18.922982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:32.641 [2024-07-11 18:37:18.922995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:32.641 [2024-07-11 
18:37:18.923015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:32.641 [2024-07-11 18:37:18.923039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:32.641 [2024-07-11 18:37:18.923147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:32.641 [2024-07-11 18:37:18.923238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:32.641 [2024-07-11 18:37:18.923256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:32.641 [2024-07-11 18:37:18.923267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:32.641 [2024-07-11 18:37:18.923308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:32.641 [2024-07-11 18:37:18.923330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:32.641 [2024-07-11 18:37:18.923342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:32.641 [2024-07-11 18:37:18.923353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:32.641 [2024-07-11 18:37:18.923408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:32.641 [2024-07-11 18:37:18.923424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:32.641 [2024-07-11 18:37:18.923440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:32.641 [2024-07-11 18:37:18.923450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:32.641 [2024-07-11 18:37:18.923510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:32.641 [2024-07-11 18:37:18.923539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:32.641 [2024-07-11 18:37:18.923551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:32.641 [2024-07-11 18:37:18.923560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:32.641 [2024-07-11 18:37:18.923768] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 26.914 ms, result 0 00:30:32.900 00:30:32.900 00:30:32.900 18:37:19 ftl.ftl_restore_fast -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:30:34.804 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:30:34.804 18:37:20 ftl.ftl_restore_fast -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:30:34.804 18:37:20 ftl.ftl_restore_fast -- ftl/restore.sh@85 -- # restore_kill 00:30:34.804 18:37:20 ftl.ftl_restore_fast -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:30:34.804 18:37:21 ftl.ftl_restore_fast -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:30:34.804 18:37:21 ftl.ftl_restore_fast -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:30:34.804 18:37:21 ftl.ftl_restore_fast -- ftl/restore.sh@32 -- # killprocess 96619 00:30:34.804 18:37:21 ftl.ftl_restore_fast -- common/autotest_common.sh@948 -- # '[' -z 96619 ']' 00:30:34.804 18:37:21 ftl.ftl_restore_fast -- common/autotest_common.sh@952 -- # kill -0 96619 00:30:34.804 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (96619) - No such process 00:30:34.804 Process with pid 96619 is not found 00:30:34.804 18:37:21 ftl.ftl_restore_fast -- common/autotest_common.sh@975 -- # echo 'Process with pid 96619 is not found' 00:30:34.804 18:37:21 
ftl.ftl_restore_fast -- ftl/restore.sh@33 -- # remove_shm 00:30:34.804 Remove shared memory files 00:30:34.804 18:37:21 ftl.ftl_restore_fast -- ftl/common.sh@204 -- # echo Remove shared memory files 00:30:34.804 18:37:21 ftl.ftl_restore_fast -- ftl/common.sh@205 -- # rm -f rm -f 00:30:34.804 18:37:21 ftl.ftl_restore_fast -- ftl/common.sh@206 -- # rm -f rm -f /dev/hugepages/ftl_8dbb777d-c192-43ab-944a-cae90757a7a8_band_md /dev/hugepages/ftl_8dbb777d-c192-43ab-944a-cae90757a7a8_l2p_l1 /dev/hugepages/ftl_8dbb777d-c192-43ab-944a-cae90757a7a8_l2p_l2 /dev/hugepages/ftl_8dbb777d-c192-43ab-944a-cae90757a7a8_l2p_l2_ctx /dev/hugepages/ftl_8dbb777d-c192-43ab-944a-cae90757a7a8_nvc_md /dev/hugepages/ftl_8dbb777d-c192-43ab-944a-cae90757a7a8_p2l_pool /dev/hugepages/ftl_8dbb777d-c192-43ab-944a-cae90757a7a8_sb /dev/hugepages/ftl_8dbb777d-c192-43ab-944a-cae90757a7a8_sb_shm /dev/hugepages/ftl_8dbb777d-c192-43ab-944a-cae90757a7a8_trim_bitmap /dev/hugepages/ftl_8dbb777d-c192-43ab-944a-cae90757a7a8_trim_log /dev/hugepages/ftl_8dbb777d-c192-43ab-944a-cae90757a7a8_trim_md /dev/hugepages/ftl_8dbb777d-c192-43ab-944a-cae90757a7a8_vmap 00:30:34.804 18:37:21 ftl.ftl_restore_fast -- ftl/common.sh@207 -- # rm -f rm -f 00:30:34.804 18:37:21 ftl.ftl_restore_fast -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:30:34.804 18:37:21 ftl.ftl_restore_fast -- ftl/common.sh@209 -- # rm -f rm -f 00:30:34.804 00:30:34.804 real 3m22.383s 00:30:34.804 user 3m10.611s 00:30:34.804 sys 0m12.977s 00:30:34.804 18:37:21 ftl.ftl_restore_fast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:34.804 18:37:21 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:30:34.804 ************************************ 00:30:34.804 END TEST ftl_restore_fast 00:30:34.804 ************************************ 00:30:34.804 18:37:21 ftl -- common/autotest_common.sh@1142 -- # return 0 00:30:34.804 18:37:21 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:30:34.804 18:37:21 ftl -- ftl/ftl.sh@14 -- # killprocess 89192 00:30:34.804 18:37:21 ftl -- common/autotest_common.sh@948 -- # '[' -z 89192 ']' 00:30:34.804 18:37:21 ftl -- common/autotest_common.sh@952 -- # kill -0 89192 00:30:34.804 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (89192) - No such process 00:30:34.804 Process with pid 89192 is not found 00:30:34.804 18:37:21 ftl -- common/autotest_common.sh@975 -- # echo 'Process with pid 89192 is not found' 00:30:34.804 18:37:21 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:30:34.804 18:37:21 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=98639 00:30:34.804 18:37:21 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:34.804 18:37:21 ftl -- ftl/ftl.sh@20 -- # waitforlisten 98639 00:30:34.804 18:37:21 ftl -- common/autotest_common.sh@829 -- # '[' -z 98639 ']' 00:30:34.804 18:37:21 ftl -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:34.804 18:37:21 ftl -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:34.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:34.804 18:37:21 ftl -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:34.804 18:37:21 ftl -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:34.804 18:37:21 ftl -- common/autotest_common.sh@10 -- # set +x 00:30:34.804 [2024-07-11 18:37:21.197548] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:30:34.804 [2024-07-11 18:37:21.197732] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98639 ] 00:30:35.063 [2024-07-11 18:37:21.339906] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:35.063 [2024-07-11 18:37:21.381693] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:35.628 18:37:22 ftl -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:35.628 18:37:22 ftl -- common/autotest_common.sh@862 -- # return 0 00:30:35.628 18:37:22 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:30:36.194 nvme0n1 00:30:36.194 18:37:22 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:30:36.194 18:37:22 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:36.194 18:37:22 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:30:36.194 18:37:22 ftl -- ftl/common.sh@28 -- # stores=ce3c104c-dab7-4a06-b6b9-4698d4ae0d19 00:30:36.194 18:37:22 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:30:36.194 18:37:22 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ce3c104c-dab7-4a06-b6b9-4698d4ae0d19 00:30:36.453 18:37:22 ftl -- ftl/ftl.sh@23 -- # killprocess 98639 00:30:36.453 18:37:22 ftl -- common/autotest_common.sh@948 -- # '[' -z 98639 ']' 00:30:36.453 18:37:22 ftl -- common/autotest_common.sh@952 -- # kill -0 98639 00:30:36.453 18:37:22 ftl -- common/autotest_common.sh@953 -- # uname 00:30:36.453 18:37:22 ftl -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:36.453 18:37:22 ftl -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 98639 00:30:36.453 18:37:22 ftl -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:36.453 18:37:22 ftl -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:36.453 killing process with pid 98639 00:30:36.453 18:37:22 ftl -- common/autotest_common.sh@966 -- # echo 'killing process with pid 98639' 00:30:36.453 18:37:22 ftl -- common/autotest_common.sh@967 -- # kill 98639 00:30:36.453 18:37:22 ftl -- common/autotest_common.sh@972 -- # wait 98639 00:30:37.019 18:37:23 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:30:37.019 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:37.019 Waiting for block devices as requested 00:30:37.019 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:30:37.276 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:30:37.276 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:30:37.276 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:30:42.546 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:30:42.546 18:37:28 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:30:42.546 18:37:28 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:30:42.546 Remove shared memory files 00:30:42.546 18:37:28 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:30:42.546 18:37:28 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:30:42.546 18:37:28 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:30:42.546 18:37:28 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:30:42.546 18:37:28 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:30:42.546 00:30:42.546 real 
14m21.546s 00:30:42.546 user 16m47.018s 00:30:42.546 sys 1m35.253s 00:30:42.546 18:37:28 ftl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:42.546 ************************************ 00:30:42.546 END TEST ftl 00:30:42.546 ************************************ 00:30:42.546 18:37:28 ftl -- common/autotest_common.sh@10 -- # set +x 00:30:42.546 18:37:28 -- common/autotest_common.sh@1142 -- # return 0 00:30:42.546 18:37:28 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:30:42.546 18:37:28 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:30:42.546 18:37:28 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:30:42.546 18:37:28 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:30:42.546 18:37:28 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:30:42.546 18:37:28 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:30:42.546 18:37:28 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:30:42.546 18:37:28 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:30:42.546 18:37:28 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:30:42.546 18:37:28 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:30:42.546 18:37:28 -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:42.546 18:37:28 -- common/autotest_common.sh@10 -- # set +x 00:30:42.546 18:37:28 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:30:42.546 18:37:28 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:30:42.546 18:37:28 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:30:42.546 18:37:28 -- common/autotest_common.sh@10 -- # set +x 00:30:43.925 INFO: APP EXITING 00:30:43.925 INFO: killing all VMs 00:30:43.925 INFO: killing vhost app 00:30:43.925 INFO: EXIT DONE 00:30:44.494 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:44.752 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:30:44.752 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:30:44.752 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:30:44.752 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:30:45.320 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:45.579 Cleaning 00:30:45.579 Removing: /var/run/dpdk/spdk0/config 00:30:45.579 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:30:45.579 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:30:45.579 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:30:45.579 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:30:45.579 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:30:45.579 Removing: /var/run/dpdk/spdk0/hugepage_info 00:30:45.579 Removing: /var/run/dpdk/spdk0 00:30:45.579 Removing: /var/run/dpdk/spdk_pid73830 00:30:45.579 Removing: /var/run/dpdk/spdk_pid73980 00:30:45.579 Removing: /var/run/dpdk/spdk_pid74179 00:30:45.579 Removing: /var/run/dpdk/spdk_pid74261 00:30:45.579 Removing: /var/run/dpdk/spdk_pid74290 00:30:45.579 Removing: /var/run/dpdk/spdk_pid74401 00:30:45.579 Removing: /var/run/dpdk/spdk_pid74418 00:30:45.579 Removing: /var/run/dpdk/spdk_pid74572 00:30:45.579 Removing: /var/run/dpdk/spdk_pid74638 00:30:45.579 Removing: /var/run/dpdk/spdk_pid74715 00:30:45.579 Removing: /var/run/dpdk/spdk_pid74801 00:30:45.579 Removing: /var/run/dpdk/spdk_pid74874 00:30:45.579 Removing: /var/run/dpdk/spdk_pid74908 00:30:45.579 Removing: /var/run/dpdk/spdk_pid74944 00:30:45.579 Removing: /var/run/dpdk/spdk_pid75007 00:30:45.579 Removing: /var/run/dpdk/spdk_pid75104 00:30:45.579 Removing: /var/run/dpdk/spdk_pid75543 
00:30:45.579 Removing: /var/run/dpdk/spdk_pid75590 00:30:45.579 Removing: /var/run/dpdk/spdk_pid75637 00:30:45.579 Removing: /var/run/dpdk/spdk_pid75653 00:30:45.579 Removing: /var/run/dpdk/spdk_pid75716 00:30:45.579 Removing: /var/run/dpdk/spdk_pid75732 00:30:45.579 Removing: /var/run/dpdk/spdk_pid75796 00:30:45.579 Removing: /var/run/dpdk/spdk_pid75812 00:30:45.579 Removing: /var/run/dpdk/spdk_pid75859 00:30:45.579 Removing: /var/run/dpdk/spdk_pid75877 00:30:45.579 Removing: /var/run/dpdk/spdk_pid75919 00:30:45.579 Removing: /var/run/dpdk/spdk_pid75937 00:30:45.579 Removing: /var/run/dpdk/spdk_pid76062 00:30:45.579 Removing: /var/run/dpdk/spdk_pid76093 00:30:45.579 Removing: /var/run/dpdk/spdk_pid76174 00:30:45.579 Removing: /var/run/dpdk/spdk_pid76222 00:30:45.579 Removing: /var/run/dpdk/spdk_pid76253 00:30:45.579 Removing: /var/run/dpdk/spdk_pid76309 00:30:45.579 Removing: /var/run/dpdk/spdk_pid76350 00:30:45.579 Removing: /var/run/dpdk/spdk_pid76380 00:30:45.579 Removing: /var/run/dpdk/spdk_pid76410 00:30:45.579 Removing: /var/run/dpdk/spdk_pid76451 00:30:45.579 Removing: /var/run/dpdk/spdk_pid76481 00:30:45.579 Removing: /var/run/dpdk/spdk_pid76517 00:30:45.579 Removing: /var/run/dpdk/spdk_pid76552 00:30:45.579 Removing: /var/run/dpdk/spdk_pid76583 00:30:45.579 Removing: /var/run/dpdk/spdk_pid76623 00:30:45.579 Removing: /var/run/dpdk/spdk_pid76653 00:30:45.579 Removing: /var/run/dpdk/spdk_pid76689 00:30:45.579 Removing: /var/run/dpdk/spdk_pid76724 00:30:45.579 Removing: /var/run/dpdk/spdk_pid76760 00:30:45.579 Removing: /var/run/dpdk/spdk_pid76794 00:30:45.579 Removing: /var/run/dpdk/spdk_pid76825 00:30:45.579 Removing: /var/run/dpdk/spdk_pid76861 00:30:45.838 Removing: /var/run/dpdk/spdk_pid76899 00:30:45.838 Removing: /var/run/dpdk/spdk_pid76938 00:30:45.838 Removing: /var/run/dpdk/spdk_pid76968 00:30:45.838 Removing: /var/run/dpdk/spdk_pid77010 00:30:45.838 Removing: /var/run/dpdk/spdk_pid77070 00:30:45.838 Removing: /var/run/dpdk/spdk_pid77164 00:30:45.838 Removing: /var/run/dpdk/spdk_pid77309 00:30:45.838 Removing: /var/run/dpdk/spdk_pid77382 00:30:45.838 Removing: /var/run/dpdk/spdk_pid77413 00:30:45.838 Removing: /var/run/dpdk/spdk_pid77858 00:30:45.838 Removing: /var/run/dpdk/spdk_pid77940 00:30:45.838 Removing: /var/run/dpdk/spdk_pid78038 00:30:45.838 Removing: /var/run/dpdk/spdk_pid78080 00:30:45.838 Removing: /var/run/dpdk/spdk_pid78111 00:30:45.838 Removing: /var/run/dpdk/spdk_pid78181 00:30:45.838 Removing: /var/run/dpdk/spdk_pid78782 00:30:45.838 Removing: /var/run/dpdk/spdk_pid78813 00:30:45.838 Removing: /var/run/dpdk/spdk_pid79294 00:30:45.838 Removing: /var/run/dpdk/spdk_pid79376 00:30:45.838 Removing: /var/run/dpdk/spdk_pid79478 00:30:45.838 Removing: /var/run/dpdk/spdk_pid79516 00:30:45.838 Removing: /var/run/dpdk/spdk_pid79547 00:30:45.838 Removing: /var/run/dpdk/spdk_pid79567 00:30:45.838 Removing: /var/run/dpdk/spdk_pid81373 00:30:45.838 Removing: /var/run/dpdk/spdk_pid81499 00:30:45.838 Removing: /var/run/dpdk/spdk_pid81503 00:30:45.838 Removing: /var/run/dpdk/spdk_pid81515 00:30:45.838 Removing: /var/run/dpdk/spdk_pid81554 00:30:45.838 Removing: /var/run/dpdk/spdk_pid81558 00:30:45.838 Removing: /var/run/dpdk/spdk_pid81570 00:30:45.838 Removing: /var/run/dpdk/spdk_pid81615 00:30:45.838 Removing: /var/run/dpdk/spdk_pid81619 00:30:45.838 Removing: /var/run/dpdk/spdk_pid81631 00:30:45.838 Removing: /var/run/dpdk/spdk_pid81670 00:30:45.838 Removing: /var/run/dpdk/spdk_pid81674 00:30:45.838 Removing: /var/run/dpdk/spdk_pid81686 00:30:45.838 Removing: 
/var/run/dpdk/spdk_pid83033 00:30:45.838 Removing: /var/run/dpdk/spdk_pid83111 00:30:45.838 Removing: /var/run/dpdk/spdk_pid84495 00:30:45.838 Removing: /var/run/dpdk/spdk_pid85828 00:30:45.838 Removing: /var/run/dpdk/spdk_pid85904 00:30:45.838 Removing: /var/run/dpdk/spdk_pid85987 00:30:45.838 Removing: /var/run/dpdk/spdk_pid86058 00:30:45.838 Removing: /var/run/dpdk/spdk_pid86155 00:30:45.838 Removing: /var/run/dpdk/spdk_pid86218 00:30:45.838 Removing: /var/run/dpdk/spdk_pid86348 00:30:45.838 Removing: /var/run/dpdk/spdk_pid86692 00:30:45.838 Removing: /var/run/dpdk/spdk_pid86723 00:30:45.838 Removing: /var/run/dpdk/spdk_pid87163 00:30:45.838 Removing: /var/run/dpdk/spdk_pid87345 00:30:45.838 Removing: /var/run/dpdk/spdk_pid87428 00:30:45.838 Removing: /var/run/dpdk/spdk_pid87527 00:30:45.839 Removing: /var/run/dpdk/spdk_pid87564 00:30:45.839 Removing: /var/run/dpdk/spdk_pid87588 00:30:45.839 Removing: /var/run/dpdk/spdk_pid87866 00:30:45.839 Removing: /var/run/dpdk/spdk_pid87894 00:30:45.839 Removing: /var/run/dpdk/spdk_pid87950 00:30:45.839 Removing: /var/run/dpdk/spdk_pid88284 00:30:45.839 Removing: /var/run/dpdk/spdk_pid88428 00:30:45.839 Removing: /var/run/dpdk/spdk_pid89192 00:30:45.839 Removing: /var/run/dpdk/spdk_pid89311 00:30:45.839 Removing: /var/run/dpdk/spdk_pid89476 00:30:45.839 Removing: /var/run/dpdk/spdk_pid89564 00:30:45.839 Removing: /var/run/dpdk/spdk_pid89918 00:30:45.839 Removing: /var/run/dpdk/spdk_pid90177 00:30:45.839 Removing: /var/run/dpdk/spdk_pid90516 00:30:45.839 Removing: /var/run/dpdk/spdk_pid90686 00:30:45.839 Removing: /var/run/dpdk/spdk_pid90805 00:30:45.839 Removing: /var/run/dpdk/spdk_pid90847 00:30:45.839 Removing: /var/run/dpdk/spdk_pid90975 00:30:45.839 Removing: /var/run/dpdk/spdk_pid90989 00:30:45.839 Removing: /var/run/dpdk/spdk_pid91030 00:30:45.839 Removing: /var/run/dpdk/spdk_pid91221 00:30:45.839 Removing: /var/run/dpdk/spdk_pid91430 00:30:45.839 Removing: /var/run/dpdk/spdk_pid91883 00:30:45.839 Removing: /var/run/dpdk/spdk_pid92338 00:30:45.839 Removing: /var/run/dpdk/spdk_pid92790 00:30:45.839 Removing: /var/run/dpdk/spdk_pid93308 00:30:45.839 Removing: /var/run/dpdk/spdk_pid93439 00:30:45.839 Removing: /var/run/dpdk/spdk_pid93532 00:30:45.839 Removing: /var/run/dpdk/spdk_pid94200 00:30:45.839 Removing: /var/run/dpdk/spdk_pid94259 00:30:45.839 Removing: /var/run/dpdk/spdk_pid94754 00:30:46.098 Removing: /var/run/dpdk/spdk_pid95179 00:30:46.098 Removing: /var/run/dpdk/spdk_pid95719 00:30:46.098 Removing: /var/run/dpdk/spdk_pid95841 00:30:46.098 Removing: /var/run/dpdk/spdk_pid95872 00:30:46.098 Removing: /var/run/dpdk/spdk_pid95925 00:30:46.098 Removing: /var/run/dpdk/spdk_pid95981 00:30:46.098 Removing: /var/run/dpdk/spdk_pid96034 00:30:46.098 Removing: /var/run/dpdk/spdk_pid96209 00:30:46.098 Removing: /var/run/dpdk/spdk_pid96267 00:30:46.098 Removing: /var/run/dpdk/spdk_pid96316 00:30:46.098 Removing: /var/run/dpdk/spdk_pid96391 00:30:46.098 Removing: /var/run/dpdk/spdk_pid96414 00:30:46.098 Removing: /var/run/dpdk/spdk_pid96481 00:30:46.098 Removing: /var/run/dpdk/spdk_pid96619 00:30:46.098 Removing: /var/run/dpdk/spdk_pid96812 00:30:46.098 Removing: /var/run/dpdk/spdk_pid97245 00:30:46.098 Removing: /var/run/dpdk/spdk_pid97723 00:30:46.098 Removing: /var/run/dpdk/spdk_pid98156 00:30:46.098 Removing: /var/run/dpdk/spdk_pid98639 00:30:46.098 Clean 00:30:46.098 18:37:32 -- common/autotest_common.sh@1451 -- # return 0 00:30:46.098 18:37:32 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:30:46.098 18:37:32 -- 
common/autotest_common.sh@728 -- # xtrace_disable 00:30:46.098 18:37:32 -- common/autotest_common.sh@10 -- # set +x 00:30:46.098 18:37:32 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:30:46.098 18:37:32 -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:46.098 18:37:32 -- common/autotest_common.sh@10 -- # set +x 00:30:46.098 18:37:32 -- spdk/autotest.sh@387 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:30:46.098 18:37:32 -- spdk/autotest.sh@389 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:30:46.098 18:37:32 -- spdk/autotest.sh@389 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:30:46.098 18:37:32 -- spdk/autotest.sh@391 -- # hash lcov 00:30:46.098 18:37:32 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:30:46.098 18:37:32 -- spdk/autotest.sh@393 -- # hostname 00:30:46.098 18:37:32 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1716830599-074-updated-1705279005 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:30:46.362 geninfo: WARNING: invalid characters removed from testname! 00:31:08.305 18:37:53 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:10.208 18:37:56 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:12.743 18:37:58 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:15.279 18:38:01 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:17.176 18:38:03 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:19.707 18:38:05 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o 
/home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:22.239 18:38:08 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:31:22.239 18:38:08 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:22.239 18:38:08 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:31:22.239 18:38:08 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:22.239 18:38:08 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:22.239 18:38:08 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:22.239 18:38:08 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:22.239 18:38:08 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:22.239 18:38:08 -- paths/export.sh@5 -- $ export PATH 00:31:22.239 18:38:08 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:22.239 18:38:08 -- common/autobuild_common.sh@443 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:31:22.239 18:38:08 -- common/autobuild_common.sh@444 -- $ date +%s 00:31:22.239 18:38:08 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1720723088.XXXXXX 00:31:22.239 18:38:08 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1720723088.idqAV0 00:31:22.239 18:38:08 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:31:22.239 18:38:08 -- common/autobuild_common.sh@450 -- $ '[' -n v22.11.4 ']' 00:31:22.239 18:38:08 -- common/autobuild_common.sh@451 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:31:22.239 18:38:08 -- common/autobuild_common.sh@451 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:31:22.239 18:38:08 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:31:22.239 18:38:08 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:31:22.239 18:38:08 -- common/autobuild_common.sh@460 -- $ get_config_params 00:31:22.239 18:38:08 -- 
00:31:22.239 18:38:08 -- common/autotest_common.sh@10 -- $ set +x
00:31:22.239 18:38:08 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme'
00:31:22.239 18:38:08 -- common/autobuild_common.sh@462 -- $ start_monitor_resources
00:31:22.239 18:38:08 -- pm/common@17 -- $ local monitor
00:31:22.239 18:38:08 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:31:22.239 18:38:08 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:31:22.239 18:38:08 -- pm/common@25 -- $ sleep 1
00:31:22.239 18:38:08 -- pm/common@21 -- $ date +%s
00:31:22.239 18:38:08 -- pm/common@21 -- $ date +%s
00:31:22.239 18:38:08 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1720723088
00:31:22.239 18:38:08 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1720723088
00:31:22.239 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1720723088_collect-vmstat.pm.log
00:31:22.239 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1720723088_collect-cpu-load.pm.log
00:31:23.191 18:38:09 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT
00:31:23.191 18:38:09 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10
00:31:23.191 18:38:09 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk
00:31:23.191 18:38:09 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:31:23.191 18:38:09 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]]
00:31:23.191 18:38:09 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:31:23.191 18:38:09 -- spdk/autopackage.sh@19 -- $ timing_finish
00:31:23.191 18:38:09 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:31:23.191 18:38:09 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:31:23.191 18:38:09 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:31:23.191 18:38:09 -- spdk/autopackage.sh@20 -- $ exit 0
00:31:23.191 18:38:09 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:31:23.191 18:38:09 -- pm/common@29 -- $ signal_monitor_resources TERM
00:31:23.191 18:38:09 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:31:23.191 18:38:09 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:31:23.191 18:38:09 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]]
00:31:23.191 18:38:09 -- pm/common@44 -- $ pid=100319
00:31:23.191 18:38:09 -- pm/common@50 -- $ kill -TERM 100319
00:31:23.191 18:38:09 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:31:23.191 18:38:09 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]]
00:31:23.191 18:38:09 -- pm/common@44 -- $ pid=100320
00:31:23.191 18:38:09 -- pm/common@50 -- $ kill -TERM 100320
+ [[ -n 5947 ]]
+ sudo kill 5947
00:31:23.201 [Pipeline] }
00:31:23.219 [Pipeline] // timeout
00:31:23.225 [Pipeline] }
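The stop_monitor_resources trace above is the classic pidfile teardown: each monitor records its pid at startup, and the EXIT trap looks the pid up and signals it. A minimal sketch of the same pattern (the glob and directory layout are assumptions for illustration, not copied from pm/common):

    power_dir=$out/power
    for pidfile in "$power_dir"/collect-*.pid; do
        [[ -e $pidfile ]] || continue                    # monitor may have exited already
        kill -TERM "$(<"$pidfile")" 2>/dev/null || true  # TERM lets it flush its .pm.log before exiting
    done

Using TERM rather than KILL matters here: the collectors trap it to close their .pm.log files cleanly, which is why both logs survive into the archived output.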
00:31:23.242 [Pipeline] // stage
00:31:23.247 [Pipeline] }
00:31:23.264 [Pipeline] // catchError
00:31:23.273 [Pipeline] stage
00:31:23.275 [Pipeline] { (Stop VM)
00:31:23.293 [Pipeline] sh
00:31:23.572 + vagrant halt
00:31:26.106 ==> default: Halting domain...
00:31:32.686 [Pipeline] sh
00:31:32.965 + vagrant destroy -f
00:31:35.502 ==> default: Removing domain...
00:31:36.082 [Pipeline] sh
00:31:36.445 + mv output /var/jenkins/workspace/nvme-vg-autotest/output
00:31:36.461 [Pipeline] }
00:31:36.479 [Pipeline] // stage
00:31:36.484 [Pipeline] }
00:31:36.502 [Pipeline] // dir
00:31:36.507 [Pipeline] }
00:31:36.524 [Pipeline] // wrap
00:31:36.531 [Pipeline] }
00:31:36.546 [Pipeline] // catchError
00:31:36.556 [Pipeline] stage
00:31:36.558 [Pipeline] { (Epilogue)
00:31:36.573 [Pipeline] sh
00:31:36.853 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:31:42.133 [Pipeline] catchError
00:31:42.135 [Pipeline] {
00:31:42.152 [Pipeline] sh
00:31:42.437 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:31:42.695 Artifacts sizes are good
00:31:42.704 [Pipeline] }
00:31:42.720 [Pipeline] // catchError
00:31:42.730 [Pipeline] archiveArtifacts
00:31:42.737 Archiving artifacts
00:31:42.889 [Pipeline] cleanWs
00:31:42.900 [WS-CLEANUP] Deleting project workspace...
00:31:42.900 [WS-CLEANUP] Deferred wipeout is used...
00:31:42.905 [WS-CLEANUP] done
00:31:42.907 [Pipeline] }
00:31:42.924 [Pipeline] // stage
00:31:42.929 [Pipeline] }
00:31:42.944 [Pipeline] // node
00:31:42.949 [Pipeline] End of Pipeline
00:31:42.983 Finished: SUCCESS
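The Stop VM and Epilogue stages above follow a fixed teardown order: graceful halt, forced destroy so a hung guest cannot block cleanup, move results into the Jenkins workspace, then compress and size-check artifacts before archiving. A minimal sketch of that order (the $WORKSPACE variable and relative script paths are illustrative assumptions, not taken from the pipeline definition):

    vagrant halt                   # graceful guest shutdown ("Halting domain...")
    vagrant destroy -f             # force-remove the domain even if halt hung
    mv output "$WORKSPACE/output"  # hand results back to the Jenkins workspace
    jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
    jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh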